AVAILABLE FOR RE-SCAN

What is this? This page shows the result of a machine-generated analysis of a specific website, commissioned by a PrivacyScore user. The analysis checks whether the privacy of visitors is technically protected as well as possible when they visit the given internet addresses, and whether the operator uses common security mechanisms on the website. This can indicate how seriously an operator takes data protection. However, the actual security level that is achieved cannot be determined.

How are the results presented? Our analysis focuses on the following aspects: whether tracking services are used ("NoTrack" category), whether selected attacks are prevented, the quality of encryption during data transmission to the website (EncWeb), and the quality when sending e-mails to an existing e-mail server (EncMail).

What exactly is checked and what do the results mean? We check the internet addresses with several techniques, which we have described in detail in a research paper.

What is the purpose of PrivacyScore? With PrivacyScore we make websites publicly comparable in terms of selected properties. As scientists, we are interested in how users and operators deal with this form of transparency. Among other things, this raises the question of whether website operators have an additional incentive to improve their websites.

What can be concluded from the results, what not?

No statement on necessity. The fact that a web page fails a specific check does not automatically mean that sensitive data is at risk. Some security mechanisms are only necessary to protect against strong attackers (e.g., governments).

Limited expressiveness. The results cover only security mechanisms that can be observed from the outside when visiting the specified internet addresses. It is quite possible that an operator uses additional internal protection mechanisms and has therefore decided to forgo some externally visible mechanisms. Furthermore, additional security mechanisms may be used on individual pages (e.g., for the transmission of passwords). Such variations are not taken into account in the analysis. Therefore, one cannot conclude from the failure of individual checks that a provider does not handle personal data with sufficient care. Conversely, it is also possible that a website has serious security holes even though it achieves a good result on this page.

NoTrack: No Tracking by Website and Third Parties

Many websites use services provided by third parties to enhance their functionality. However, embedding third parties has privacy implications for users: the information that they are visiting a particular website is also disclosed to every embedded third party.

Conditions for passing: Test passes if no 3rd party resources are being embedded on the website.

Often, web tracking is done through embedding trackers and advertising companies as third parties in the website. This test checks if any of the 3rd parties are known trackers or advertisers, as determined by matching them against a number of blocking lists (see “conditions for passing”).

Conditions for passing: Test passes if none of the embedded 3rd parties is a known tracker, as determined by a combination of three common blocking rulesets for AdBlock Plus: the EasyList, EasyPrivacy and Fanboy’s Annoyance List (which covers social media embeds).
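To illustrate how such list-based matching works in principle, the following sketch matches a third-party host against highly simplified AdBlock-Plus-style domain rules. The function names and example rules are hypothetical; real EasyList rules carry many more features (resource-type options, exception rules, regular expressions) that are ignored here.

```python
# Hypothetical sketch of list-based tracker detection; not
# PrivacyScore's actual implementation.

def parse_domain_rules(rules):
    """Extract domains from AdBlock-Plus-style rules of the form '||domain^'."""
    domains = set()
    for rule in rules:
        rule = rule.strip()
        if rule.startswith("||") and rule.endswith("^"):
            domains.add(rule[2:-1].lower())
    return domains

def is_known_tracker(third_party_host, blocked_domains):
    """A host matches if it equals a blocked domain or is a subdomain of one."""
    host = third_party_host.lower()
    return any(host == d or host.endswith("." + d) for d in blocked_domains)
```

A host like cdn.tracker.example would then match a rule ||tracker.example^, while a merely similar-looking name such as nottracker.example would not.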

Reliability: reliable.

Potential scan errors: Due to modifications to the list to make them compatible with our system, false positives may be introduced in rare conditions (e.g., if rules were blocking only specific resource types).

Cookies can also be set by third parties that are included in the website. This test counts 3rd party cookies, and matches them against the same tracker and advertising lists that the 3rd party tests use.

Conditions for passing: The test will pass if no cookies are set by third parties.
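To illustrate the idea behind the cookie classification (again a sketch, not the scanner's actual code), the following counts cookies whose domain attribute does not belong to the first party. Proper public-suffix handling, which a real implementation needs, is omitted.

```python
def count_third_party_cookies(cookies, first_party_domain):
    """Count cookies whose domain is unrelated to the first party.
    cookies: list of dicts with a 'domain' key as stored by the browser
    (leading dots allowed). Public-suffix handling is omitted."""
    fp = first_party_domain.lower().lstrip(".")

    def is_first_party(domain):
        d = domain.lower().lstrip(".")
        # same domain, subdomain of the site, or parent domain of the site
        return d == fp or d.endswith("." + fp) or fp.endswith("." + d)

    return sum(1 for c in cookies if not is_first_party(c["domain"]))
```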

We obtain the IP addresses of the domain and look up their countries in a GeoIP database. It is believed that personal data is better protected if a website is hosted in a country that implements the European General Data Protection Regulation (GDPR). We plan to offer more flexible geo-location tests in the future.

Conditions for passing: The test passes if all IP addresses (A records) are found to be in countries that implement the GDPR.

Reliability: unreliable. We perform a single DNS lookup for the A records of the domain name of the respective site. Due to DNS round robin configurations, we may not see all IP addresses that are actually used by a site. Furthermore, if the site uses content delivery networks or anycasting the set of addresses we observe may differ from the set for other users. We look up the IP addresses within a local copy of a GeoIP database. We use the GeoLite2 data created by MaxMind, available from http://www.maxmind.com.

Potential scan errors: The result may be incorrect for the following reasons. First, we may miss some IP addresses and therefore our results may be incomplete (causing the test to pass while it shouldn’t). Second, we may see a set of IP addresses that is biased due to the location of our scanning servers (all of them are currently in Germany), which may again cause the test to pass while it shouldn’t. Therefore, the results may be wrong for users located in other countries. Third, the determination of the geo-location of IP addresses is known to be imperfect. This may cause the test to fail or succeed where it shouldn’t.
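The overall logic of this check can be sketched as follows. The GDPR country set shown here is deliberately incomplete and purely illustrative, and lookup_country stands in for a query against a real GeoLite2 database.

```python
import socket

# Illustrative only: incomplete country set; a real implementation
# would enumerate all countries implementing the GDPR.
GDPR_COUNTRIES = {"AT", "BE", "DE", "FR", "IE", "NL", "SE"}

def resolve_a_records(domain):
    """Single DNS lookup for the IPv4 addresses of a domain, mirroring
    the single-lookup behavior described above."""
    infos = socket.getaddrinfo(domain, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

def geoip_test_passes(addresses, lookup_country):
    """Pass only if every observed address geolocates to a GDPR country.
    lookup_country maps an IP address string to an ISO country code."""
    return all(lookup_country(ip) in GDPR_COUNTRIES for ip in addresses)
```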

We obtain the IP addresses of the mail server(s) listed in the MX records of the domain and look up their countries in a GeoIP database. It is believed that personal data is better protected if a mail server is hosted in a country that implements the European General Data Protection Regulation (GDPR). We plan to offer more flexible geo-location tests in the future.

Conditions for passing: The test passes if all IP addresses associated with the MX records are found to be in countries that implement the GDPR. This test is neutral if there are no MX records.

Reliability: unreliable. We perform a single DNS lookup for the MX records of the domain name of the respective site. Then we obtain all A records of each MX record. Due to DNS round robin configurations, we may not see all IP addresses that are actually used by a site. Furthermore, if the site uses content delivery networks or anycasting the set of addresses we observe may differ from the set for other users. We look up the IP addresses within a local copy of a GeoIP database. We use the GeoLite2 data created by MaxMind, available from http://www.maxmind.com. Finally, we only check mail servers found in MX records. Therefore, we miss sites where the domain does not have MX records, but mail is directly handled by a mail server running on the IP address given by its A record.

Potential scan errors: The result may be incorrect for the following reasons. First, we may miss some IP addresses and therefore our results may be incomplete (causing the test to pass while it shouldn’t). Second, we may see a set of IP addresses that is biased due to the location of our scanning servers (all of them are currently in Germany), which may again cause the test to pass while it shouldn’t. Therefore, the results may be wrong for users located in other countries. Third, the determination of the geo-location of IP addresses is known to be imperfect. This may cause the test to fail or succeed where it shouldn’t.

Some site owners outsource the hosting of mail or web servers to specialized operators located in a foreign country. Some users may find it surprising that web and mail traffic are not handled in the same fashion and that, in one of the two cases, their traffic is transferred to a foreign country.

Conditions for passing: Test passes if the set of countries where the web servers are located matches the set of countries where the mail servers associated with the domain are located. If there are no MX records this test is neutral.
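The decision logic of this comparison fits in a few lines. The function name and return values are illustrative:

```python
def geomatch(web_countries, mail_countries, has_mx):
    """'neutral' without MX records; otherwise 'pass' only when the web
    and mail servers are located in exactly the same set of countries."""
    if not has_mx:
        return "neutral"
    return "pass" if set(web_countries) == set(mail_countries) else "fail"
```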

Reliability: unreliable. See GEOMAIL check.

Potential scan errors: See GEOMAIL check. This check may wrongly be recorded as "failed", if one of the servers is found to be located in the country "Europe", which is due to peculiarities of how MaxMind records geolocations.

To protect their users, websites offering HTTPS should automatically redirect visitors to the secure version of the website if they visit the unsecured version, as users cannot be expected to change the address by hand. This test verifies that this is the case. If the browser is redirected to a secure URL, all other HTTPS tests use the final URL.

Conditions for passing: Test passes if the server automatically redirects the browser to an HTTPS URL when the browser requests an HTTP URL. Neutral if the given URL is already an HTTPS URL.

Reliability: reliable.

Potential scan errors: If users are redirected to the HTTPS version using JavaScript, this test may not detect it.
Scan Module: OpenWPM
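Ignoring the browser instrumentation, the decision logic amounts to comparing the scheme of the initial URL with that of the final URL. This sketch assumes the redirect chain has already been followed by the scanning browser:

```python
def https_redirect_result(start_url, final_url):
    """'neutral' if the user already supplied an HTTPS URL; otherwise
    'pass' only if the server redirected the browser to HTTPS."""
    if start_url.startswith("https://"):
        return "neutral"
    return "pass" if final_url.startswith("https://") else "fail"
```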

This HTTP header helps prevent adversaries from eavesdropping on encrypted connections. HSTS allows a site to tell the browser that it should only be retrieved via encrypted HTTPS connections. This decreases the risk of a so-called SSL Stripping attack.

Conditions for passing: The header is set on the HTTPS URL that is reached after following potential redirects.

Reliability: unreliable. We only evaluate this header for the HTTPS URL to which a site redirects upon visit. We rely on the result of testssl.sh to evaluate the validity of the header. Under certain circumstances, a website may be protected without setting its own HSTS header, e.g., subdomains whose parent domain has an HSTS preloading directive covering subdomains; this will not be detected by this test, but will show up in the HSTS Preloading test.

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may also miss security problems on sites that issue multiple requests to different servers in order to render the resulting page but forget to set the header in all responses. We may miss the presence of HSTS if redirection is not performed with the HTTP Location header but with JavaScript.
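For illustration, a minimal parser for the Strict-Transport-Security header value. A real validator (such as testssl.sh, on which this check relies) performs additional checks; the function names here are hypothetical.

```python
def parse_hsts(header_value):
    """Split a Strict-Transport-Security value such as
    'max-age=31536000; includeSubDomains' into its directives."""
    directives = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        directives[name.strip().lower()] = value.strip().strip('"')
    return directives

def hsts_active(header_value):
    """The policy only protects the site while max-age is positive."""
    d = parse_hsts(header_value)
    return d.get("max-age", "").isdigit() and int(d["max-age"]) > 0
```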

HSTS Preloading further decreases the risk of SSL Stripping attacks. To this end the information that a site should only be retrieved via HTTPS is stored in a list that is preloaded with the browser. This prevents SSL Stripping attacks during the very first visit of a site. To allow inclusion in the HSTS preloading lists, the servers need to indicate that this inclusion is acceptable.

Conditions for passing: The server indicates it is ready for HSTS preloading, or is already part of the HSTS preloading list.

Reliability: unreliable. We only evaluate this header for the HTTPS URL to which a site redirects upon visit. We will miss preloading indicators on higher-level domains (e.g. example.com if the provided domain was www2.example.com).

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may also miss security problems on sites that issue multiple requests to different servers in order to render the resulting page but forget to set the header in all responses. We may miss the presence of HSTS if redirection is not performed with the HTTP Location header but with JavaScript.
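The header-side readiness conditions can be sketched as follows. The one-year max-age threshold reflects, to the best of our understanding, the current Chromium preload submission requirements; the preload list additionally requires a valid certificate and an HTTP-to-HTTPS redirect, which a header check alone cannot verify.

```python
def parse_directives(header_value):
    """Minimal directive parser for the Strict-Transport-Security header."""
    out = {}
    for part in header_value.split(";"):
        name, _, value = part.strip().partition("=")
        if name:
            out[name.lower()] = value
    return out

def ready_for_preloading(directives):
    """Header-side conditions for preload readiness: a max-age of at
    least one year plus the includeSubDomains and preload directives."""
    try:
        max_age = int(directives.get("max-age", "0"))
    except ValueError:
        return False
    return (max_age >= 31536000
            and "includesubdomains" in directives
            and "preload" in directives)
```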

HSTS Preloading further decreases the risk of SSL Stripping attacks. To this end the information that a site should only be retrieved via HTTPS is stored in a list that is preloaded with the browser. This prevents SSL Stripping attacks during the very first visit of a site.

Conditions for passing: The final URL is part of the current Chromium HSTS preload list, or one of its parent domains is and has “include-subdomains” set to true.

Reliability: unreliable. We only evaluate this header for the HTTPS URL to which a site redirects upon visit. We also do not evaluate if the HSTS policy actually has force-https set to true.

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may also miss security problems on sites that issue multiple requests to different servers in order to render the resulting page but forget to set the header in all responses. We may miss the presence of HSTS if redirection is not performed with the HTTP Location header but with JavaScript.

This HTTP header ensures that outsiders cannot tamper with encrypted transmissions. With HPKP, sites can announce that the cryptographic keys used by their servers are tied to certain certificates. This decreases the risk of man-in-the-middle attacks by adversaries who use forged certificates. However, opinions about the usefulness and risks of this functionality differ widely among experts. This check is informational only and does not influence the ranking of the website.

Conditions for passing: The Public-Key-Pins header is present and the certificate hashes in the header can be matched against the certificate presented during the TLS handshake.

Reliability: unreliable. We only evaluate this header for the HTTPS URL to which a site redirects upon visit. We rely on the result of testssl.sh to evaluate the validity of the pins.

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may miss the presence of HPKP if redirection is not performed with the HTTP Location header but with JavaScript.
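For illustration, extracting the pin-sha256 values from a Public-Key-Pins header. Matching these pins against the SPKI hashes of the served certificate chain, which the actual check performs via testssl.sh, is omitted here.

```python
def parse_pins(header_value):
    """Extract the base64 pin-sha256 values from a Public-Key-Pins
    header, e.g. 'pin-sha256="dGVzdA=="; max-age=5184000'. The
    cryptographic comparison against the TLS handshake is omitted."""
    pins = []
    for part in header_value.split(";"):
        name, _, value = part.strip().partition("=")
        if name.lower() == "pin-sha256":
            pins.append(value.strip().strip('"'))
    return pins
```

Note that a valid header must contain at least two pins (one of them a backup pin), which a real validator would also verify.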

SSL 3.0 is a deprecated encryption protocol with known vulnerabilities. Encrypted connections that use SSL 3.0 are vulnerable to the so-called POODLE attack. This allows adversaries to steal sensitive pieces of information such as session cookies that are transferred over a connection.

Conditions for passing: Test passes if the server does not offer the SSL 3.0 protocol. Neutral if the server does not offer encryption at all or if the server cannot be reached.
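The pass/fail/neutral logic of this check can be sketched as follows, assuming the set of protocols the server accepted has already been determined by a scanner such as testssl.sh:

```python
def ssl3_check(offered_protocols, reachable=True):
    """offered_protocols: names of the protocols the server accepted
    during the scan, e.g. {'TLSv1.2', 'TLSv1.3'}. Neutral if the server
    offers no encryption at all or cannot be reached."""
    if not reachable or not offered_protocols:
        return "neutral"
    return "fail" if "SSLv3" in offered_protocols else "pass"
```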

TLS 1.0 is a legacy encryption protocol that does not support the latest cryptographic algorithms. From a security perspective, it would be desirable to disable TLS 1.0 support. However, many sites still offer TLS 1.0 in order to support legacy clients, although, as of 2014, most contemporary web browsers support at least TLS 1.1. Furthermore, the PCI DSS 3.2 standard mandates that sites that process credit card data remove support for TLS 1.0 by June 2018.

Informational check: As TLS 1.0 is neither desirable nor completely deprecated, this test is informational and will always be neutral.

TLS 1.1 is an outdated encryption protocol that does not support the latest cryptographic algorithms. From a security perspective, it would be desirable to disable TLS 1.1 support in favor of TLS 1.2. However, there are still many clients that are not compatible with TLS 1.2.

Informational check: At the moment, we show the result of this check for informational purposes only. The result of this check does not influence the rating and ranking.

If HTTPS websites include content from HTTP sites, this opens the website to additional attacks. This 'mixed content' will also be blocked by modern browsers, which may lead to problems in how the website is displayed.

Conditions for passing: Test passes if the website does not use mixed content. If the server does not offer HTTPS, the test is neutral.
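A naive sketch of mixed-content detection over a page's HTML source follows. Real browsers additionally distinguish blockable from upgradeable mixed content and see dynamically loaded resources, which a static scan of the markup does not.

```python
from html.parser import HTMLParser

class MixedContentScanner(HTMLParser):
    """Collect http:// URLs found in common resource attributes of a
    page that was served over HTTPS. Illustrative only."""
    RESOURCE_ATTRS = {"src", "href"}
    RESOURCE_TAGS = {"script", "img", "iframe", "link", "audio", "video", "source"}

    def __init__(self):
        super().__init__()
        self.insecure_urls = []

    def handle_starttag(self, tag, attrs):
        if tag not in self.RESOURCE_TAGS:
            return
        for name, value in attrs:
            if name in self.RESOURCE_ATTRS and value and value.startswith("http://"):
                self.insecure_urls.append(value)

def find_mixed_content(html_text):
    scanner = MixedContentScanner()
    scanner.feed(html_text)
    return scanner.insecure_urls
```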

Informational check: Test passes if the server is not vulnerable to this bug. As mitigations exist that cannot be detected automatically, the result will be neutral if the attack is detected to be present. The result is also neutral if the server does not offer encryption at all or if the server cannot be reached.

Informational check: Test passes if the server is not vulnerable to this bug. As no mitigations exist that do not break backwards-compatibility with most old clients, we will not actively penalize servers for this vulnerability at the moment, however this may change in the future. The result is also neutral if the server does not offer encryption at all or if the server cannot be reached.

Informational check: Test passes if the server is not vulnerable to this bug. As mitigations exist that cannot be detected automatically, the result will be neutral if the attack is detected to be present. The result is also neutral if the server does not offer encryption at all or if the server cannot be reached.

Attacks: Protection Against Various Attacks

Web servers may be configured incorrectly and expose private information on the public internet. This test looks for a series of common mistakes: Exposing the "server-status" or "server-info" pages of the web server, common debugging files that may have been forgotten on the server, and the presence of version control system files from the Git or SVN systems, which may contain private or security-critical information.

Conditions for passing: No leaks have been detected.

Reliability: unreliable. The detection is not completely reliable, as we can only check for certain indicators of problems. This test may result in both false positives (claiming that a website is insecure where it isn't) and false negatives (claiming that a website is secure where it isn't).

Potential scan errors: We only check for leaks at specific, pre-defined paths. If the website exposes information in other places, we may not detect it.
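For illustration, a hypothetical probe list and a simple indicator check. The paths and heuristics shown here are examples, not the scanner's actual list; a matching response is an indicator of a leak, not proof.

```python
# Hypothetical probe paths for common information leaks.
PROBE_PATHS = [
    "/server-status", "/server-info",   # web server status pages
    "/.git/HEAD", "/.svn/entries",      # version control metadata
    "/test.php", "/phpinfo.php",        # forgotten debugging files
]

def probe_urls(base_url):
    """Build the candidate URLs a scanner would request."""
    return [base_url.rstrip("/") + path for path in PROBE_PATHS]

def looks_like_git_leak(status_code, body):
    """/.git/HEAD of a real repository starts with a ref line."""
    return status_code == 200 and body.startswith("ref: refs/")
```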

This HTTP header helps to prevent Cross-Site-Scripting attacks. With CSP, a site can whitelist servers from which it expects its content to be loaded. This prevents adversaries from injecting malicious scripts into the site.

Conditions for passing: The Content-Security-Policy header is present.

Reliability: shallow. At the moment we only check for this header in the response that belongs to the first request for the final URL (after following potential redirects to other HTTP/HTTPS URLs). Furthermore, we only report whether the header is set or not, i.e., we do not analyze whether the content of the header makes sense.

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may also miss security problems on sites that issue multiple requests to render the resulting page but forget to set the header in all responses.

This HTTP header prevents adversaries from embedding a site for malicious purposes. XFO allows a site to tell the browser that it is not acceptable to include it within a frame from another server. This decreases the risk of click-jacking attacks.

Conditions for passing: The X-Frame-Options header is present and set to “SAMEORIGIN” (as recommended by securityheaders.io).

Reliability: shallow. At the moment we only check for this header in the response that belongs to the first request for the final URL (after following potential redirects to other HTTP/HTTPS URLs).

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may also miss security problems on sites that issue multiple requests to render the resulting page but forget to set the header in all responses.

This HTTP header prevents certain cross-site scripting (XSS) attacks. Browsers are instructed to stop loading the page when they detect reflective XSS attacks. This header is useful for older browsers that do not support the more recent Content Security Policy header yet.

Conditions for passing: The X-XSS-Protection HTTP header is present and set to “1; mode=block” (which is the best policy and also recommended by the scan service securityheaders.io).

Reliability: unreliable. At the moment we only check for this header in the response that belongs to the first request for the final URL (after following potential redirects to other HTTP/HTTPS URLs).

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may also miss security problems on sites that issue multiple requests to render the resulting page but forget to set the header in all responses.

This HTTP header prevents browsers from accidentally executing code. Browsers are instructed to interpret all objects received from a server according to the MIME type set in the Content-Type HTTP header. Traditionally, browsers have tried to guess the content type based on the content, which has been exploited by attackers to make browsers execute malicious code.

Conditions for passing: The X-Content-Type-Options HTTP header is present and set to “nosniff”.

Reliability: unreliable. At the moment we only check for this header in the response that belongs to the first request for the final URL (after following potential redirects to other HTTP/HTTPS URLs).

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may also miss security problems on sites that issue multiple requests to render the resulting page but forget to set the header in all responses.

A secure referrer policy prevents the browser from disclosing the URL of the current page to other pages. Without a referrer policy most browsers send a Referer header whenever content is retrieved from third parties or when you visit a different page by clicking on a link. This may disclose sensitive information.

Conditions for passing: The Referrer-Policy header is present and set to “no-referrer” (the policy recommended by dataskydd.net in their Webbkoll scan service).

Reliability: unreliable. At the moment we only check for this header in the response that belongs to the first request for the final URL (after following potential redirects to other HTTP/HTTPS URLs).

Potential scan errors: We may miss security problems on sites that redirect multiple times. We may also miss security problems on sites that issue multiple requests to render the resulting page but forget to set the header in all responses. We fail to detect a referrer policy that is set via the “referer” HTTP-EQUIV META tag in the HTML code.
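Taken together, the passing conditions of the five header checks above can be expressed as a single evaluation over the response headers of the final URL. This is a sketch; the normalization accounts for the fact that HTTP header names are case-insensitive.

```python
def evaluate_security_headers(headers):
    """headers: response headers of the final URL. Applies the passing
    conditions described above: CSP merely has to be present, the other
    headers must carry the recommended values."""
    h = {k.lower(): v.strip() for k, v in headers.items()}
    return {
        "CSP": "content-security-policy" in h,
        "XFO": h.get("x-frame-options", "").upper() == "SAMEORIGIN",
        "XSS": h.get("x-xss-protection", "") == "1; mode=block",
        "XCTO": h.get("x-content-type-options", "").lower() == "nosniff",
        "RP": h.get("referrer-policy", "").lower() == "no-referrer",
    }
```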

SSL 3.0 is a deprecated encryption protocol with known vulnerabilities. Encrypted connections that use SSL 3.0 are vulnerable to the so-called POODLE attack. This allows adversaries to steal sensitive pieces of information such as session cookies that are transferred over a connection.

Conditions for passing: Test passes if the server does not offer the SSL 3.0 protocol. Neutral if the server does not offer encryption at all or if the server cannot be reached.

TLS 1.0 is a legacy encryption protocol that does not support the latest cryptographic algorithms. From a security perspective, it would be desirable to disable TLS 1.0 support. However, many sites still offer TLS 1.0 in order to support legacy clients, although, as of 2014, most contemporary web browsers support at least TLS 1.1. Furthermore, the PCI DSS 3.2 standard mandates that sites that process credit card data remove support for TLS 1.0 by June 2018.

Informational check: As TLS 1.0 is neither desirable nor completely deprecated, this test is informational and will always be neutral.

TLS 1.1 is an outdated encryption protocol that does not support the latest cryptographic algorithms. From a security perspective, it would be desirable to disable TLS 1.1 support in favor of TLS 1.2. However, there are still many clients that are not compatible with TLS 1.2.

Informational check: At the moment, we show the result of this check for informational purposes only. The result of this check does not influence the rating and ranking.

Informational check: Test passes if the server is not vulnerable to this bug. As mitigations exist that cannot be detected automatically, the result will be neutral if the attack is detected to be present. The result is also neutral if the server does not offer encryption at all or if the server cannot be reached.

Informational check: Test passes if the server is not vulnerable to this bug. As no mitigations exist that do not break backwards-compatibility with most old clients, we will not actively penalize servers for this vulnerability at the moment, however this may change in the future. The result is also neutral if the server does not offer encryption at all or if the server cannot be reached.

Informational check: Test passes if the server is not vulnerable to this bug. As mitigations exist that cannot be detected automatically, the result will be neutral if the attack is detected to be present. The result is also neutral if the server does not offer encryption at all or if the server cannot be reached.

About

PrivacyScore is a website scanning tool that allows anyone to benchmark security and privacy features of websites.
Rankings are public and can be configured to one's preferences.
PrivacyScore helps users, activists, and data protection authorities.

Feedback

We are curious to receive your feedback.
Please do not hesitate to contact us if you observe any errors.
Site owners can request to exclude their sites from future scans.

Beta Status

PrivacyScore is currently in public beta.
While we are doing our best, we currently cannot guarantee the accuracy of the displayed results and rankings.