…especially when coupled with what is clearly an admission by Mr. Honan that he is, fundamentally, responsible for enabling the chained series of events that took place:

In the space of one hour, my entire digital life was destroyed. First my Google account was taken over, then deleted. Next my Twitter account was compromised, and used as a platform to broadcast racist and homophobic messages. And worst of all, my AppleID account was broken into, and my hackers used it to remotely erase all of the data on my iPhone, iPad, and MacBook.

In many ways, this was all my fault. My accounts were daisy-chained together. Getting into Amazon let my hackers get into my Apple ID account, which helped them get into Gmail, which gave them access to Twitter. Had I used two-factor authentication for my Google account, it’s possible that none of this would have happened, because their ultimate goal was always to take over my Twitter account and wreak havoc. Lulz.

Had I been regularly backing up the data on my MacBook, I wouldn’t have had to worry about losing more than a year’s worth of photos, covering the entire lifespan of my daughter, or documents and e-mails that I had stored in no other location.

Those security lapses are my fault, and I deeply, deeply regret them.

The important snippets highlighted above are obscured by the salacious title and by the bulk of the article, which focuses on how the services he enabled and relied upon — however flawed certain components of that trust and process may have been — are *really* at the center of the debate here. Or ought to be.

There’s clearly a bit of emotional transference occurring. It’s easier to associate causality with a faceless big corporate machine than to swing the light toward the victim, even when he himself accepts the blame.

Before you think I’m madly defending and/or suggesting that there weren’t breakdowns with any of the vendors — especially Apple — let me assure you I am not. There are many things that can and should be addressed here, but leaving out the human element, the root of it all here, is dangerous.

I am concerned that, as a community, we often suggest that consumers are incapable and inculpable when it comes to understanding the risks associated with the clicky-clicky-connect syndrome that all of these interconnected services bring.

People give third-party applications and services unfettered access to services like Twitter and Facebook every day — even when messages warning of the potential incursions on privacy and security are clearly stated.

When something does fail — and it does and always will — we vilify the suppliers (sometimes rightfully so, for poor practices), but we never really look at what we need to do to prevent a repeat: “Those security lapses are my fault, and I deeply, deeply regret them.”

The more interconnected things become, the more dependent we shall be upon flawed trust models and the expectation that users bear no responsibility.

It’s unfortunate that the only real way people learn is through misfortune; any way you look at it, that’s the thing that drives awareness.

There are many lessons we can learn from Mr. Honan’s unfortunate experience…I urge you to focus less on blaming one link in the chain and instead guide the people you can influence to weigh decisions of convenience against the potential tradeoffs they incur.

I don’t know what’s happening behind the scenes at Facebook. Perhaps it’s the manifest evils of Beacon victimizing humanity coming back to haunt them, but there’s been a horrific breach over at Facebook.

I just don’t feel safe there any longer.

I joined FB because a number of groups to which I belong and enjoy monitoring decided to leverage the mass pandemonium that is social netgawking and make their information available only on this populist portal.

So I log on today, fully expecting to check my hatching eggs, play a round of Scrabulous and explain to yet another aging Filipino male "philanthropist" that I’m really not a 16-year-old Catholic school girl trolling for a good time, when I discovered the harrowing news:

I’d been SuperPoked!

That’s right.

I was eFingered right after being attacked with a drive-by Vampire tea bagging knee-bar and an inverted trout slap…and some "friend" decided to curse me with a Thunder Pinch and send me a Pink Sock as a gift.

Seriously, what the hell! When did we start talking like this? I just mastered Snoop Dogg’s schizzle, yo! I can’t keep up.

Most of the people that I am FB "friends" with are 30+ years old. What possible reason would any self-respecting "old person" have for superpoking, trout-slapping or thunder-pinching me? I’ve tried talking my wife into this stuff and it doesn’t work unless I liquor her up. How do you think this is going to work on me?

Me!?

If you can’t figure out how to revert to good old-fashioned email or IM, and speak ‘murican, I don’t want to talk to you.

No more hatching Jack-a-lopes. No more Roman Candles. No more Romanian Turtle Wedgies.

My dear friend Murray (sorry if that expression of warmth comes as a surprise, Murr…) sent me this story from the Register:

Polish teen derails tram after hacking train network

A Polish teenager allegedly turned the tram system in the city of Lodz into his own personal train set, triggering chaos and derailing four vehicles in the process. Twelve people were injured in one of the incidents.

The 14-year-old modified a TV remote control so that it could be used to change track points, The Telegraph reports. Local police said the youngster trespassed in tram depots to gather information needed to build the device. The teenager told police that he modified track setting for a prank.

"He studied the trams and the tracks for a long time and then built a device that looked like a TV remote control and used it to manoeuvre the trams and the tracks," said Miroslaw Micor, a spokesman for Lodz police.

"He had converted the television control into a device capable of controlling all the junctions on the line and wrote in the pages of a school exercise book where the best junctions were to move trams around and what signals to change.

"He treated it like any other schoolboy might a giant train set, but it was lucky nobody was killed. Four trams were derailed, and others had to make emergency stops that left passengers hurt. He clearly did not think about the consequences of his actions," Micor added.

Transport command and control systems are commonly designed by engineers with little exposure or knowledge about security using commodity electronics and a little native wit. The apparent ease with which Lodz’s tram network was hacked, even by these low standards, is still a bit of an eye opener.

Problems with the signalling system on Lodz’s tram network became apparent on Tuesday when a driver attempting to steer his vehicle to the right was involuntarily taken to the left. As a result the rear wagon of the train jumped the rails and collided with another passing tram. Transport staff immediately suspected outside interference.

The youth, described by his teachers as an electronics buff and exemplary student, faces charges at a special juvenile court of endangering public safety. ®

Yes, yes. I know, it’s not a SCADA system…as fun as that would be to bring up again, I don’t need any death threats, so I won’t mention it…directly. But if you read about the recent security design debacle of the Boeing 787 Dreamliner and then look at this, it doesn’t take much of a logic jump to see why we should be worried about how command/control systems are implemented.

My next piece of chicanery is to steal one of Mogull’s Wii Guitar Hero controllers, hack it, and cause it to electrocute his cat every time he hits C# on Stairway to Heaven…

I’ve been trying to construct a palette of blog entries over the last few months which communicates the need for a holistic network, host and data-centric approach to information security and information survivability architectures.

I’ve been paying close attention to the dynamics of the DLP/CMF market/feature positioning as well as what’s going on in enterprise information architecture with the continued emergence of WebX.0 and SOA.

That’s why I found this Computerworld article written by Jay Cline very interesting, as it focused on the need for a centralized data governance function within an organization in order to manage the risk associated with the information management lifecycle (which includes security and survivability). The article also discussed how roles within the organization, namely the CIO/CTO, will evolve in parallel.

Nothing terribly earth-shattering here, but the exclamation point of the article is that enabling a centralized data governance organization requires a (gasp!) tricky combination of people, process and technology:

"How does this all add up? Let me connect the dots: Data must soon become centralized, its use must be strictly controlled within legal parameters, and information must drive the business model. Companies that don’t put a single, C-level person in charge of making this happen will face two brutal realities: lawsuits driving up costs and eroding trust in the company, and competitive upstarts stealing revenues through more nimble use of centralized information."

Let’s deconstruct this a little, because I totally get the essence of what is proposed, but there are some realities that must be discussed. Working backwards:

I agree that data and its use must be strictly controlled within legal parameters.

I agree that a single, C-level person needs to be accountable for the data lifecycle.

However, while it would be fantastic to centralize data, I think it’s a nice theory in the wrong universe.

Interestingly, Richard Bejtlich focused his response to the article on this very notion, but I can’t get past a couple of issues, some of them technical and some of them business-related.

There’s a confusing mish-mash alluded to in Richard’s blog of "second home" data repositories that maintain copies of data and somehow also magically enforce data control and protection schemes outside of the repository while simultaneously allowing the flexibility of "local" data creation. The competing theme for me is that centralization of data is really irrelevant — it’s convenient — but what you really need is the (and you’ll excuse the lazy use of a politically-charged term) "DRM" functionality to work irrespective of where data is created, stored, or used.

Centralized storage is good (and selfishly so for someone like Richard) for performing forensics and auditing, but it’s not necessarily technically or fiscally efficient and doesn’t necessarily align to an agile business model.

The timeframe for the evolution of this data centralization was not really established, but we don’t have the most difficult part licked yet — the application of the accompanying metadata describing the information assets we wish to protect, or the ability to uniformly classify and enforce its creation, distribution, utilization and destruction.

Now we’re supposed to also be able to magically centralize all our data, too? I know that large organizations have embraced the notion of data warehousing, but it’s not the underlying data stores I’m truly worried about, it’s the combination of data from multiple silos within the data warehouses that concerns me and its distribution to multi-dimensional analytic consumers.

You may be able to protect a DB’s table, row, column or a file, but how do you apply a policy to a distributed ETL function across multiple datasets and paths?

ATAMO? (And Then A Miracle Occurs)
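To make that question concrete, here is a hypothetical sketch (the column names, classification levels and policy table are all invented for illustration) of why a column-level policy that works fine against a known table evaporates the moment an ETL transformation derives new fields:

```python
# Hypothetical column-level classification policy (names/levels invented):
POLICY = {"ssn": "restricted", "name": "internal", "region": "public"}
LEVELS = ["public", "internal", "restricted"]

def enforce(rows, clearance):
    """Drop any column whose classification exceeds the caller's clearance."""
    allowed = {col for col, lvl in POLICY.items()
               if LEVELS.index(lvl) <= LEVELS.index(clearance)}
    return [{c: v for c, v in row.items() if c in allowed} for row in rows]

rows = [{"ssn": "123-45-6789", "name": "Alice", "region": "NE"}]
assert enforce(rows, "public") == [{"region": "NE"}]  # policy works on the source table

# But one ETL step later, the derived column matches nothing in POLICY --
# the classification label didn't travel with the data:
derived = [{"customer": r["name"] + "/" + r["ssn"][-4:]} for r in rows]
# "customer" now carries SSN digits, yet no policy entry exists for it.
```

The point of the sketch: enforcement bound to named tables and columns can’t follow data through joins, aggregates and transformations unless the classification metadata itself moves with the data — which is exactly the miracle being assumed.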

What I find intriguing about this article is that the so-described pendulum effect of data centralization (data warehousing, BI/DI) and resource centralization (data center virtualization, WAN optimization/caching, thin-client computing) seems to be on a direct collision course with the way in which applications and data are being distributed via Web2.0/Service Oriented architectures and delivery underpinnings such as rich(er) client-side technologies like mash-ups and AJAX…

So what I don’t get is how one balances centralizing data when today’s emerging infrastructure and information architectures are constructed to do just the opposite: distribute data, processing and data re-use/transformation across the enterprise. We’ve already let the data genie out of the bottle and now we’re trying to cram it back in? (*please see below for a perfect illustration)

I ask this again within the scope of deploying a centralized data governance organization and its associated technology and processes within an agile business environment.

/Hoff

P.S. I expect that a certain analyst friend of mine will be emailing me in T-Minus 10, 9…

*Here’s a perfect illustration of the futility of centrally storing "data." Click on the image and notice the second bullet item…:

In May I blogged what I thought was an interesting question regarding the legality and liability of reverse engineering in security vulnerability research. That discussion focused on the reverse engineering and vulnerability research of hardware and software products that were performed locally.

I continued with a follow-on discussion and extended the topic to include security vulnerability research from the web-based perspective in which I was interested to see how different the opinions on the legality and liability were from many of the top security researchers as it relates to the local versus remote vulnerability research and disclosure perspectives.

As part of the last post, I made reference to a working group organized by CSI whose focus and charter were to discuss web security research law. This group is made up of some really smart people and I was looking forward to the conclusions reached by them on the topic and what might be done to potentially solve the obvious mounting problems associated with vulnerability research and disclosure.

Unfortunately, the conclusions of the working group are an indictment of the sad state of affairs in the security space and further underscore the sense of utter hopelessness many in the security community experience.

What the group concluded after 14 extremely interesting and well-written pages was absolutely nothing:

The meeting of minds that took place over the past two months advanced the group’s collective knowledge on the issue of Web security research law. Yet if one assumed that the discussion advanced the group’s collective understanding of this issue, one might be mistaken.

Informative though the work was, it raised more questions than answers. In the pursuit of clarity, we found, instead, turbidity.

Thus it follows, that there are many opportunities for further thought, further discussion, further research and further stirring up of murky depths. In the short term, the working group has plans to pursue the following endeavors:

Creating disclosure policy guidelines — both to help site owners write disclosure policies, and for security researchers to understand them.

Creating guidelines for creating a "dummy" site.

Creating a more complete matrix of Web vulnerability research methods, written with the purpose of helping attorneys, lawmakers and law enforcement officers understand the varying degrees of invasiveness.

Jeremiah Grossman, a friend and one of the working group members summarized the report and concluded with the following: "…maybe within the next 3-5 years as more incidents like TJX occur, we’ll have both remedies." Swell.

Please don’t misunderstand my cynical tone and disappointment as a reflection on any of the folks who participated in this working group — many of whom I know and respect. It is, however, sadly another example of the hamster wheel of pain we’re all on when the best and brightest we have can’t draw meaningful conclusions on issues such as this.

I was really hoping we’d be further down the path towards getting our arms around the problem so we could present meaningful solutions that would make a dent in the space. Unfortunately, I think where we are is the collective shoulder-shrug shrine of cynicism, perched perilously on the cliff overlooking the chasm of despair, which drops off into the trough of disillusionment.

One of the benefits of living near Boston is the abundance of amazing museums and historic sites available to visit within 50 miles of my homestead.

This weekend the family and I decided to go hit the Museum of Science for a day of learning and fun.

As we were about to leave, I spied an XP-based computer sitting in the corner of one of the wings and was intrigued by the sign on top of the monitor instructing any volunteers to login:

Then I noticed the highlighted instruction sheet taped to the wall next to the machine:

If you’re sharp enough, you’ll notice that the sheet instructs the volunteer how to remember their login credentials — and what their password is (‘1234’) unless they have changed it!

"So?" you say, "That’s not a risk. You don’t have any usernames!"

Looking to the right I saw a very interesting plaque. It contained the first and last names of the museum’s most diligent volunteers who had served hundreds of hours on behalf of the Museum. You can guess where this is going…

I tried for 30 minutes to find someone (besides Megan Crosby on the bottom of the form) to whom I could suggest a more appropriate method of secure sign-on instructions. The best I could do was one of the admission folks who stamped my hand upon entry and ended up with a manager’s phone number written on the back of a stroller rental slip.

Bear with me here as I admire the sheer elegance and simplicity of what this latest piece of malware uses as its covert back channel: ICMP. I know…nothing fancy, but that’s why I think its simplicity underscores the bigger problem we have in securing this messy mash-up of Internet connected chewy goodness.

When you think about it, even the dopiest of users knows that when they experience some sort of abnormal network access issue, they can just open their DOS (pun intended) command prompt and type "ping…" and then call the helpdesk when they don’t get the obligatory ‘pong’ response.

It’s a really useful little protocol. Good for all sorts of things like out-of-band notifications for network connectivity, unreachable services and even quenching of overly-anxious network hosts.

Network/security admins like it because it makes troubleshooting easy and it actually forms some of the glue and crutches that folks depend upon (unfortunately) to keep their networks running…

It’s had its fair share of negative press, sure. But who amongst us hasn’t? I mean, Smurfs are cute and cuddly, so how can you blame poor old ICMP for merely transporting them? Ping of Death? That’s just not nice! Nuke Attacks!? Floods!?

Really, now. Aren’t we being a bit harsh? Consider the utility of it all…here’s a great example:

When I used to go onsite for customer engagements, my webmail/POP3/IMAP and SMTP access was filtered. Outbound SSH and other types of port filtering were also usually in place, but my old friend ICMP was always there for me…so I tunneled my mail over ICMP using Loki and it worked great…and it always worked, because ICMP was ALWAYS open. Now, today’s IDS/IPS combos usually detect these sorts of tunneling activities, so some of the fun is over.
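To illustrate why this trick works at all, here’s a minimal sketch (not Loki’s actual implementation — just an assumption-laden toy using Python’s struct module) of building an ICMP echo request whose payload is whatever data you want to smuggle. Nothing in the protocol constrains what goes after the 8-byte header, which is the whole problem:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """Standard ones-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total >> 16) + (total & 0xFFFF)   # fold carries back in
    total += total >> 16
    return ~total & 0xFFFF

def icmp_echo(payload: bytes, ident: int = 0x1234, seq: int = 1) -> bytes:
    """Build an ICMP echo request (type 8) carrying an arbitrary payload."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # checksum zeroed
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

# The "ping" payload can be anything -- which is exactly the point:
pkt = icmp_echo(b"From: covert@example.com ...")
assert inet_checksum(pkt) == 0   # a well-formed packet checksums to zero
```

Actually putting such a packet on the wire would require a raw socket (and root), which is omitted here; the sketch only shows that a perfectly valid-looking echo request can carry a mail fragment as easily as the usual padding bytes.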

The annoying thing is that there is really no reason why the entire range of ICMP types needs to be open, and it’s not that difficult to mitigate the risk, but people don’t, because they officially belong to the LBNaSOAC (Lazy Bastard Network and Security Operators and Administrators Consortium).

However, back to the topic @ hand. I was admiring the simplicity of this newly-found data-stealer trojan that installs itself as an Internet Exploder (IE) browser helper and ultimately captures keystrokes and screen images when accessing certain banking sites and communicates back to the criminal operators using ICMP and a basic XOR encryption scheme. You can read about it here.

It’s a cool design. Right, wrong or indifferent, you have to admire the creativity and ubiquity of the back channel…until, of course, you are compromised.

There are so many opportunities for the creative uses of taken-for-granted infrastructure and supporting communication protocols to suggest that this is going to be one hairy, protracted battle.

Submit your vote for the most "clever" use of common protocols/applications for this sort of thing…

Like most folks, I’ve been preoccupied with doing nothing over the last few days, so please excuse the tardiness of this entry. Looks like Alan Shimmel and I are suffering from the same infection of laziness 😉

So, now that the 4 racks of ribs are in the smoker pending today’s festivities celebrating my country’s birth, I find it appropriate to write about this debacle now that my head’s sorted.

When I read this article several days ago regarding the standards that the OMB was "requiring" of federal civilian agencies, I was dismayed (but not surprised) to discover that once again this was another set of toothless "guidelines" meant to dampen the public outrage surrounding the recent string of privacy breaches/disclosures.

For those folks whose opinion it is that we can rest easily and put faith in our government’s ability to federalize legislation and enforcement regarding privacy and security, I respectfully suggest that this recent OMB PR Campaign announcement is one of the most profound illustrations of why that suggestion is about the most stupid thing in the universe.

Look, I realize that these are "civilian" agencies of our government, but the last time I checked, the "civilian" and "military/intelligence" arms were at least governed by the same set of folks whose responsibility it is to ensure that we, as citizens, are taken care of. This means that at certain levels, what’s good for the goose is good for the foie gras…kick down some crumbs!

We don’t necessarily need Type 1 encryption for the Dept. of Agriculture, but how about a little knowledge transfer, information sharing and reasonable due care, fellas? Help a brother out!

<sigh>

The article started off well enough…45 days to implement what should have been implemented years ago:

To comply with the new policy, agencies will have to encrypt all data on laptop or handheld computers unless the data are classified as "non-sensitive" by an agency’s deputy director. Agency employees also would need two-factor authentication — a password plus a physical device such as a key card — to reach a work database through a remote connection, which must be automatically severed after 30 minutes of inactivity.

Buahahaha! That’s great. Is the agency’s deputy director going to personally inspect every file, database transaction and email on every laptop/handheld in his agency? No, of course not. Is this going to prevent disclosure and data loss from occurring? Nope. It may make them more difficult, but there is no silver bullet.

Again, this is why data classification doesn’t work. If they knew where the data was and where it was going in the first place, it wouldn’t go missing, now would it? I posted about this very problem here.

Gee, for a $1.50 and a tour of the white house I could have drafted this. In fact, I did in a blog post a couple of weeks ago 😉

But here’s the rub in the next paragraph:

OMB said agencies are expected to have the measures in place within 45 days, and that it would work with agency inspectors general to ensure compliance. It stopped short of calling the changes "requirements," choosing instead to label them "recommendations" that were intended "to compensate for the protections offered by the physical security controls when information is removed from, or accessed from outside of the agency location."

Compensate for the protections offered by the physical security controls!? You mean like the ones that allowed for the removal of data lost in these breaches in the first place!? Jesus.

Most departments and agencies have these measures already in place. We intend to work with the Inspectors General community to review these items as well as the checklist to ensure we are properly safeguarding the information the American taxpayer has entrusted to us. Please ensure these safeguards have been reviewed and are in place within the next 45 days.

Oh really!? Are the Dept. of the Navy, the Dept. of Agriculture and the IRS among those departments that have these measures in place? And I love how polite they can be now that tens of millions of taxpayers’ personal information has been displaced…"Please ensure these safeguards…" Thanks!

Look, grow a pair, stop spending $600 on toilet seats, give these joes some funding to make it stick, make the damned "recommendations" actual "requirements," audit them like you audit the private sector for SOX, and perhaps the idiots running these organizations will take their newfound budgetary allotments and actually improve upon ridiculous information security scorecards such as these:

I don’t mean to come off like I’m whining about all of this, but perhaps we should just outsource government agency security to the private sector. It would be good for the economy and although it would become a vendor love-fest, I reckon we’d have better than a D+…

It looks like we’re going to get one of these a day at this point. Here’s the latest breach-du-jour. I guess someone thought that our military veterans were hogging the limelight, so active-duty personnel (and their families, no less) get their turn now. From eWeek:

Five spreadsheet files with personal data on approximately 28,000 sailors and family members were found on an open Web site, the U.S. Navy announced June 23.

The personal data included the name, birth date and social security number on several Navy members and dependents. The Navy said it was notified on June 22 of the breach and is working to identify and notify the individuals affected.

"There is no evidence that any of the data has been used illegally. However, individuals are encouraged to carefully monitor their bank accounts, credit card accounts and other financial transactions," the Navy said in a statement.