Comprehensive National Cybersecurity Initiative

On Tuesday, the White House published an unclassified summary of its Comprehensive National Cybersecurity Initiative (CNCI). Howard Schmidt made the announcement at the RSA Conference. These are the 12 initiatives in the plan:

Initiative #1. Manage the Federal Enterprise Network as a single network enterprise with Trusted Internet Connections.

Initiative #2. Deploy an intrusion detection system of sensors across the Federal enterprise.

Initiative #12. Define the Federal role for extending cybersecurity into critical infrastructure domains.

While this transparency is good, in this sort of thing the devil is in the details -- and we don't have any details. We also don't have any information about the legal authority for cybersecurity, or how much the NSA is, and should be, involved. Good commentary on that here. EPIC is suing the NSA to learn more about its involvement.

Sounds like an interesting idea. Too bad that if it works, it opens the door to an absolutely massive firesale (to quote Hollywood). While I applaud the gov for trying to fix its brokenness, this is it incarnate, folks.

Initiative #7: Classified Networks are about as secure as they can be. These are well protected using NSA-approved encryption devices.

Problems arising out of classified networks almost always involve human error or a user willfully disregarding the security rules. The only way to improve the security of these is to put the money toward education and rule enforcement.

OMG, the politicos have woken up and they think the stable door is open, not Pandora's casket.

#4 + #11 means "let's re-invent the Internet," give it a "rainbow coloured hue," and thus bring it to heel...

Look on it as the Defense Industry reinventing itself on the "Second Life" model. Let's call it "Tar Warts 3".

If you thought a 1200 USD invoice for a 4 USD lump hammer was bad value for money...

Essentially this means: open a bottomless pit in the ground and keep pouring taxpayers' money in.

There are two types of R&D we generally see in industry,

1, Blue sky.
2, Product improvement.

And they approximate to primary and secondary patents respectively.

Both generally get crushed underfoot by "red tape" when you try to put them in the "risk management" straitjacket of a "managed" supply chain.

It also has all sorts of knock on effects in global trade restrictions.

The trick is to use the European model, not the US model -- that is, frameworks -v- specifics.

Frameworks generally define specific interfaces at functional boundaries, which encourages a "modular" approach allowing rapid innovation, test, and update. The end-to-end method generally encourages a "monolithic" approach which discourages innovation, because it's complex and slow to test, and the whole gets replaced, not a part.

This is especially true when things are subject to unexpected change. It is a case of Mammals -v- Dinosaurs.

The hard part is designing frameworks that don't include "hard coded protocols" or "implicit assumptions," and thus treat exceptions and innovation as "expected," not "unexpected."
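The "interfaces at functional boundaries" idea above can be sketched in a few lines of Python. This is a minimal illustration, not anything from the CNCI documents; the names (`Transport`, `LoopbackTransport`, `deliver`) are hypothetical:

```python
from abc import ABC, abstractmethod

# The framework defines the boundary once, as an interface.
class Transport(ABC):
    @abstractmethod
    def send(self, payload: bytes) -> None: ...

class LoopbackTransport(Transport):
    """One interchangeable module behind the boundary."""
    def __init__(self):
        self.log = []

    def send(self, payload: bytes) -> None:
        self.log.append(payload)

def deliver(transport: Transport, messages):
    # The framework codes to the interface, never to a concrete module,
    # so a new transport can be dropped in and tested in isolation
    # without replacing the whole.
    for m in messages:
        transport.send(m)

t = LoopbackTransport()
deliver(t, [b"a", b"b"])
```

A monolithic design would bake the concrete transport into `deliver` itself, so any change forces re-testing (and likely replacing) the whole thing.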

From my experience, #1 is a horrible idea. This will create more bureaucracy. In such organizations it only takes one "expert" in the right place to cause a lot of headaches, and in some cases security incidents and downtime. I have seen it in my own organization. A little knowledge is a dangerous thing, especially when it is used to make decisions.

"Computer scientists say they’ve discovered a “severe vulnerability” in the world’s most widely used software encryption package that allows them to retrieve a machine’s secret cryptographic key."

It looks genuine enough but a few more details always helps ;)

It looks like a variation of the late-1990s Differential Power Analysis attack on smart cards.

Basically it's a form of time-related side channel attack.

When software branches, etc., it takes fractionally different time periods, or can trigger a cache reload, etc., all of which shows up on the power supply lines in one form or another.

You just have to dig your wanted signal out of all the other system noise. The simplest way to do this is to run the same thing over and over again and use a known trigger point to average out the unwanted noise and average up the wanted signal (in theory, unsynchronised noise goes down in proportion to the square root of the number of samples you average, so roughly a 3dB SNR improvement for every doubling of the number of samples).
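The averaging trick described above can be demonstrated numerically. A toy sketch (not real trace data): a small "leakage" signal buried in Gaussian noise, where averaging N aligned traces shrinks the residual noise amplitude by roughly sqrt(N):

```python
import math
import random

random.seed(1)

SIGNAL = 0.1     # wanted signal amplitude at the trigger point
NOISE_SD = 1.0   # unsynchronised noise, standard deviation

def trace():
    """One noisy sample at the trigger point."""
    return SIGNAL + random.gauss(0.0, NOISE_SD)

def averaged_noise_sd(n_traces, trials=2000):
    """Empirical std deviation of the noise left after averaging n_traces."""
    means = [sum(trace() for _ in range(n_traces)) / n_traces
             for _ in range(trials)]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

sd_1 = averaged_noise_sd(1)    # ~1.0: single trace, signal invisible
sd_64 = averaged_noise_sd(64)  # ~1/8 of that: sqrt(64) = 8x reduction
```

With 64 averages the residual noise drops to roughly an eighth of the single-trace noise, which is what makes the 0.1-amplitude "leak" recoverable.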

It is not just on the power supply lines that you can see this.

I think it was Peter Gutmann who showed a network packet timestamp attack on an AES key across the network due to cache hits.

If you remove "brain dead" software implementations, the two biggies that let security down are,

1, Side Channels.
2, Protocol Errors.
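To make the first item concrete, here's a minimal Python sketch of the classic software-level timing side channel: an early-exit comparison leaks, through its running time, how many leading bytes of a guess are correct. The function names are illustrative; `hmac.compare_digest` is the standard-library fix:

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    """Bails out at the first mismatch -> running time leaks the
    position of the mismatch, letting an attacker guess byte by byte."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    """Work done is independent of where (or whether) a mismatch
    occurs, closing the timing channel."""
    return hmac.compare_digest(a, b)
```

Both return the same answers; the difference is only visible in the time they take, which is exactly the point of a side channel.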

SSL has been hit by both of late, and as has been pointed out by many, SSL/TLS and related protocols are the foundation of Internet security and have been for nigh on 20 years...

One can probably assume that the likes of the NSA / GCHQ et al have been aware of this for the last ten years or so, at the very least.

And of course their life is made so much easier by "standard plaintext" in file headers such as MS Office documents etc.

Oh, and then there is a new kid on the block for researchers to get their heads around: "RF Fault Injection Attacks."

If you add the three types of attack together, then this fault could easily be exercisable from outside the server room and seen on the network...

@nanom -
The gummint has a duty and obligation to protect its networks and systems from exploitation. After all, they have our data in there too.

The first model "It's unclassified who cares" failed.

The second model "We hired system administrators and they know everything necessary to secure systems" failed.

The third model -- "Make the executive do security," with all the agencies implementing security as they interpreted it -- failed.

Reducing and controlling all the access points to a select handful and watching that handful is logical (and I have seen senior GS's set up internet gateways on their desktops -- well, it was a box under the desk). The agencies that did security right did this.

Concentrating intrusion detection and incident handling within a group that has the time and resources to get really good at analysis and response is logical. I've seen "incident handlers" who were collateral-duty people, which means their real job was server or database administration.

Einstein now (well, E2 actually) is a different matter. I was an end user for a year, and it was either really good or really weak; mostly I saw hits for keyloggers on end users' PCs (these would be citizens). My question was, and is: how did it get inside my TLS tunnel? Based on the nature of the intercept, there was data that was only passed within the tunnel.

If it's a proxy, it's one mighty big proxy. If it's mirroring traffic, how did it get my key? Nobody I interviewed admitted/remembered giving it up, and we didn't have an MOU.