This was done to take offline content that violated their censorship policies. Such an attack is possible against any business, service, or organization. It could be used for something as simple as taking any website on the planet offline, but could also be applied to critical infrastructure sensors and systems – think Internet of Things, nuclear power plants, 911 phone systems, etc.

The business and nation-state security implications here are quite severe. The attack was motivated by two types of content – the New York Times (banned in China) and information on bypassing the Chinese censorship firewall – neither of which is aligned with China's leadership.

This attack was executed in the following manner:

The attack was executed through HTTP hijacking: “a certain device at the border of China’s inner network and the Internet has hijacked the HTTP connections went into China, replaced some javascript files from Baidu with malicious ones that would load every two seconds.” The malicious code also apparently blocked its own re-execution to prevent looping.
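One defensive takeaway from this mechanism: if a script can be swapped in flight, its integrity has to be checked against a known-good copy. Below is a minimal, hypothetical sketch of that idea – the script contents and hash comparison are illustrative, not taken from the actual Baidu files:

```python
import hashlib

def script_tampered(script_bytes: bytes, expected_sha256: str) -> bool:
    """Return True if the script does not match its known-good hash."""
    return hashlib.sha256(script_bytes).hexdigest() != expected_sha256

# Known-good copy of a script (hypothetical content).
original = b"var a = 1;"
known_hash = hashlib.sha256(original).hexdigest()

# What might arrive over a hijacked connection: a payload that
# re-requests a target every two seconds, as described above.
injected = b"setInterval(function(){ fetch('https://target.example'); }, 2000);"

print(script_tampered(original, known_hash))   # False: intact
print(script_tampered(injected, known_hash))   # True: replaced in flight
```

This is essentially what browser Subresource Integrity does today: the page declares the expected hash, and the browser refuses to run a script that does not match it.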

Despite a good deal of articles in the mainstream media (WSJ, Bloomberg, etc.), the political response has been lacking compared to the response and support provided to Sony.

My true concern here is that this minor attack (only a few citizens of China unknowingly had their traffic used to attack a small technology company) is an excellent BETA TEST for a full-scale redirection of the traffic of all 1.4B Chinese citizens against critical infrastructure (46% of the population was used for GitHub).

Ever find yourself just click-click-clicking through every message box that pops up? Most people click through a warning (which in the land of web browsers usually means STOP, DON’T GO THERE!!) in less than 2 seconds. This appears to be the result of habituation – basically, you are used to clicking, and now we have the brain scans to prove it!

What does this mean for you? You won’t be able to re-wire your brain, but you can turn up the settings on your web browser so it simply refuses to connect to a site it is warning you about. Simple – let the browser deal with it and take away one nuisance.
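As one concrete way of "letting the browser deal with it": Chrome supports a managed policy, SSLErrorOverrideAllowed, that when set to false removes the click-through option on certificate warnings entirely. A minimal policy file sketch (the path shown is the Linux managed-policy location; other platforms differ):

```json
{
  "SSLErrorOverrideAllowed": false
}
```

Dropped into /etc/opt/chrome/policies/managed/, this takes the habituated click out of the user's hands.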

From the study:

The MRI images show a “precipitous drop” in visual processing after even one repeated exposure to a standard security warning and a “large overall drop” after 13 of them. Previously, such warning fatigue has been observed only indirectly, such as one study finding that only 14 percent of participants recognized content changes to confirmation dialog boxes or another that recorded users clicking through one-half of all SSL warnings in less than two seconds.

ENISA released a study with a methodology for identifying critical infrastructure in communication networks. While this is an important and valuable topic in its own right, I dove into this study for a particularly selfish reason … I am SEEKING a methodology that we could leverage for identifying critical connected infrastructure (cloud providers, SaaS, shared services internally for large corporations, etc.) for the larger public/private sector. Here are my highlights – I would value any additional analysis, as always:

Challenge to the organization: “..which are exactly those assets that can be identified as Critical Information Infrastructure and how we can make sure they are secure and resilient?”

Key success factors:

Detailed list of critical services

Criticality criteria for internal and external interdependencies

Effective collaboration between providers (internal and external)

Interdependency angles:

Interdependencies within a category of service

Interdependencies between categories of services

Interdependencies among data assets

Establish baseline security guidelines (due care):

Balanced to business risks & needs

Established at procurement cycle

Regularly verified (at least w/in 3 yr cycle)

Tagging/Grouping of critical categories of service

Allows for clean tracking & regular security verifications

Enables troubleshooting

Threat determination and incident response

Methodology next steps:

Partner with business and product teams to identify economic entity / market value

Identify the dependencies listed above and mark criticality based on entity / market value

Develop standards needed by providers

Investigate how monitoring to standards can be managed and achieved (in some cases contracts can support you, others will be a monopoly and you’ll need to augment their processes to protect you)

Refresh and adjust annually to reflect modifications of business values
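To make the tagging and criticality-marking steps above concrete, here is a small sketch of one way to model it. The service names, scores, and dependency links are illustrative assumptions of mine, not from the ENISA study; the idea is simply that a provider inherits the criticality of whatever depends on it:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Service:
    name: str
    market_value: int                      # 1 (low) .. 5 (high), set with business teams
    depends_on: List[str] = field(default_factory=list)

def criticality_scores(services):
    """Propagate criticality down the dependency graph: a provider is at
    least as critical as the most critical service that depends on it."""
    scores = {s.name: s.market_value for s in services}
    changed = True
    while changed:                         # iterate to a fixed point
        changed = False
        for s in services:
            for dep in s.depends_on:
                if scores[dep] < scores[s.name]:
                    scores[dep] = scores[s.name]
                    changed = True
    return scores

catalog = [
    Service("payments", 5, depends_on=["cloud-iaas", "dns"]),
    Service("marketing-site", 2, depends_on=["cloud-iaas"]),
    Service("cloud-iaas", 3),
    Service("dns", 1),
]
print(criticality_scores(catalog))
# dns and cloud-iaas inherit a 5: losing either takes down payments
```

The same pass covers both interdependency angles above – within a category (service to service) and between categories (business service to external provider) – and the resulting tags are what you would then verify on the procurement / three-year cycle.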

I hope this breakout is helpful. The ENISA document has a heavy focus on promoting government / operator ownership, but businesses cannot rely on or wait for such action and should move accordingly. The above is heavily modified and original thinking based on my experience structuring similar business programs. A bit about ENISA’s original intent of the study:

GitHub is an awesome and very popular repository system. Basically, if you want to work on something (code, a book, electronic files) and then allow others to freely make suggested modifications (think Track Changes in a Microsoft Word doc), GitHub is the new way of life. I have used it for publishing a book, writing code, and taking a Python course online, and others are using it at a scale that produces some of the fantastic tools you see online.

I recently saw a post (included below) that clarified how their encryption is set up. Basically, encryption allows you to confidentially send data to another party without fear of others intercepting, stealing, or modifying it. It appears, though, that for foo.github.io they present the appearance of encryption but in fact do not have it end to end – meaning the actual files are sent in the clear for part of the journey.
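The failure mode is easy to model: end-to-end security is a property of every hop on the path, not just the first one. A toy sketch of the setup described in the support reply below (hop names are my own illustration):

```python
# Each hop on the request path, with whether that hop is encrypted.
path = [
    ("browser -> CDN edge", True),            # TLS terminates at the CDN
    ("CDN edge -> Pages origin", False),      # plaintext over the open internet
]

def end_to_end_encrypted(hops):
    """True only if every single hop on the path is encrypted."""
    return all(encrypted for _, encrypted in hops)

print(end_to_end_encrypted(path))  # False: the browser padlock is misleading
```

The browser can only attest to the first hop, which is exactly why the padlock here creates "the appearance of trustability" without the substance.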

This is a problem in our structure of security and compliance. Today we have regulations and industry standards designed to prescribe specific security safeguards and levels to ensure a baseline amount of security. If organizations don’t meet the true intent of the regulations, do only enough to pass inspection, and create an environment that is susceptible to basic attacks – the users (you and me) are the ones who suffer.

While it is disappointing for an organization to set up something that clearly creates false trust and merely checks a box, it is more a call to action for those who operate these systems to take pride in the services they are delivering. Much as Steve Jobs insisted the insides and outsides of a system be done correctly, the security of an organization should not just look right but be right.

We must do better as owners, operators, and security professionals. Trust depends on indicators and expectations being met, and to violate that begs the question… what else is being done in the same manner?

“cben” comment below on github.com issues post:

Turns out there is no end-to-end security even with foo.github.io domain. Got this response from GH support (emphasis mine):

[…opening commentary removed…]

While HTTPS requests may appear to work, our CDN provider is adding and removing the encryption at their end, and then the request is transmitted over the open internet from our CDN provider to our GitHub Pages infrastructure, creating the appearance of trustability.

This is why we do not yet officially support HTTPS for GitHub Pages. We definitely appreciate the feedback and I’ll add a +1 to this item on our internal Feature Request List.

A new study was released by Branden Williams and the Merchant Acquirers’ Committee (MAC), and it is worth a read. One aspect that jumped out to me is the gap between claimed and actual compliance rates shared in the study. The difference here is between those who have represented being PCI compliant through Attestations of Compliance (AOC) and those who have had their programs pressure-tested by the criminals of the world, and been found wanting.

Here is the snippet from PCI Guru that highlights this discrepancy:

The biggest finding of the study and what most people are pointing to is the low compliance percentages across the MAC members’ merchants. Level 1, 2 and 3 merchants are only compliant around 67% to 69% of the time during their assessments. However, most troubling is that Level 4 merchants are only 39% compliant.

Depending on the merchant level, these figures are not even close to what Visa last reported back in 2011. Back then, Visa was stating that 98% of Level 1 merchants were reported as compliant. Level 2 merchants were reported to be at 91% compliance. Level 3 merchants were reported at 57% compliance. As is Visa’s practice, it only reported that Level 4 merchants were at a “moderate” level of compliance.

Boards of Directors, CISOs, and legal should all care deeply that PCI security (and certainly other contractual obligations) is achieved honestly. Too often organizations treat this like registering a car with the government. It is far too complex and impactful to people within and outside a given business for that. The cyber-economic connections between proper, efficient, and effective security all lend themselves to better products in the market and more focus on what the business is driving towards.

Is your program honestly secure and fully addressing these minimum practice principles?

A good article released in the NYT today highlights an elongated attack on up to 100 banks, in which the attackers learned the banks’ methods and then exploited them. What is interesting here is that the attackers studied the banks’ own processes and then customized their behavior accordingly.

It would be difficult to imagine these campaigns succeeding for such a long period if the malware had been detected (which is possible with interval security process studies), or if the bank processes had been re-examined by risk officers for activity within the dollar-range thresholds. It is typical for data to be slowly “dripped” out of networks to stay below range (hence signatures are essentially worthless as a preventive/detective tool here), and similar below-threshold behavior is needed for fraud at the human/software process level.
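The "stay below range" tactic is worth sketching, because it shows why per-transaction thresholds miss low-and-slow fraud while a cumulative view per account would still catch it. The numbers below are illustrative, with the $10 million per-transaction cap taken from the article:

```python
# Per-transaction alarm threshold (the article notes thefts were
# limited to $10 million per transaction, presumably to avoid alarms).
THRESHOLD = 10_000_000

# Hypothetical transfers drained from one compromised account.
transfers = [9_500_000, 9_200_000, 9_800_000]

# Naive control: flag any single transfer at or over the threshold.
per_transaction_alarms = [t for t in transfers if t >= THRESHOLD]

# Process-level control: look at the running total per account.
cumulative_alarm = sum(transfers) >= THRESHOLD

print(per_transaction_alarms)   # []   -- every transfer slips under the bar
print(cumulative_alarm)         # True -- the running total tells the story
```

This is the same reason interval process reviews matter: the individual events are each unremarkable, and only the aggregate pattern is anomalous.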

I look forward to the full report, to analyze the campaign and share any possible learnings beyond this surface article. Two highlights of the NYT article jump out to me:

Kaspersky Lab says it has seen evidence of $300 million in theft from clients, and believes the total could be triple that. But that projection is impossible to verify because the thefts were limited to $10 million a transaction, though some banks were hit several times. In many cases the hauls were more modest, presumably to avoid setting off alarms.

The hackers’ success rate was impressive. One Kaspersky client lost $7.3 million through A.T.M. withdrawals alone, the firm says in its report. Another lost $10 million from the exploitation of its accounting system. In some cases, transfers were run through the system operated by the Society for Worldwide Interbank Financial Telecommunication, or Swift, which banks use to transfer funds across borders. It has long been a target for hackers — and long been monitored by intelligence agencies.