Attribution is hard. It’s as much art as it is science. It’s also very misunderstood.

So, as part of my public service initiative, I created and then unintentionally crowdsourced the most definitive collection of reality-based constructs reflecting the current state of this term of art.

Here you go:

Faptribution => The process of trying to reach PR climax on naming an adversary before anyone else does

Pattribution => The art of self-congratulatory back patting that goes along with attributing an actor(s) to a specific campaign or breach.

Flacktribution => The process of dedicating your next press release to the concept that, had the victim only used $our_software, none of this would have happened. (Per Nick Selby)

Maptribution => when you really just have no fucking idea and play “pin the tail on the donkey” with a world map. (Per Sam Johnston)

At the 2015 Kaspersky Security Analyst Summit, I kicked off the event with a keynote titled: “Active Defense and the A.R.T. of W.A.R.”

The A.R.T. of W.A.R. stands for “Active Response Techniques of Weaponization and Resilience.”

You can read about some of what I discussed here. I will post the presentation shortly, and Kaspersky will release the video as well. The video of my talk is here (I am walking out, hoodie up, like I’m in a fight, per the show thematic):

While, thematically, I set the evolution of threat actors, defensive security practices, operations and technology against the backdrop of the evolution of modern mixed martial arts (the theme of the conference), the main point was really the following:

If we now face threat actors who have access to the TTPs of nation states but are not nation states themselves, and they are attacking enterprises who do not/cannot utilize these TTPs, and our only current “best practices” references against said actors are framed within the context of “cyberwar” and actionable only by representatives of a nation state, then it will be impossible for anyone outside of that circle to actively defend our interests, intellectual property and business with an appropriate and contextualized framing of the use of force.

It is extremely easy to take what I just mentioned above and start picking it apart without the very context to which I referred.

The notion of “Active Defense” is shrouded in interpretive nuance — and usually immediately escalates to the most extreme use case of “hacking back” or “counter-hacking.” As I laid out in the talk — leaning heavily on the work of Dave Dittrich in this area — there are levels of intrusion as well as levels of response, and the Rubik’s Cube of choices allows for ways of responding that include more than filing a breach report and re-imaging endpoints.

While the notions of “active” and “passive” are loaded terms without context, I think it’s important that we — as the technical community — be allowed to specifically map those combinations of intrusion and response and propose methodologies over which the air cover of legal frameworks and sovereignty can be laid. Not having this conversation is unacceptable.

Likewise unacceptable is the disingenuous representation that organizations (in the private sector) who specialize in one of the most important areas of discussion here — attribution — magically find all their information by accident on Pastebin. Intelligence — threat, signals, human, etc. — is a very specialized and delicate practice, but as it stands today, there are 4-5 companies who operate in this space with ties to the public sector/DoD/IC, locked in their own “arms race” to be the first to attribute a name, logo and theme song to every attack.

It’s fair to suggest they operate in spaces along the continuum that others do not. But these are things we really don’t talk about because they exist in the grey fringe.

Much of that information, and the sources behind it, is proprietary, and while we see executive orders and governmental offices being spun up to exchange “threat intelligence,” the reality is that even if we nail attribution, there’s nothing most of us can do about it…and I mean that technologically and operationally.

We have documents such as the Tallinn Manual and the Army Cyber Command Field Manual for Electromagnetic Warfare that govern these discussions in their realms — yet in the Enterprise space, we have only things like the CFAA.

This conversation needs to move forward. It’s difficult, it’s hairy and it’s going to take a concerted effort…but it needs a light shone upon it.

Programmable abstraction and the operational models to support it have been proven at scale

Virtualization and virtualized services are now commonplace architectural primitives in discussions of NG networking

Open Source is huge in both orchestration as well as service delivery

Entirely new network operating systems, like that of Cumulus, have emerged to challenge incumbents

SDN, NFV and overlays are starting to see production at-scale adoption beyond PoCs

Automation is starting to take root for everything from provisioning to orchestration to dynamic service insertion and traffic steering
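As a minimal, hypothetical sketch of the provisioning automation mentioned above: render per-interface config stanzas from structured data rather than hand-editing device configs. The template syntax and interface names here are illustrative, not tied to any particular NOS.

```python
# Hypothetical sketch: template-driven switch provisioning. The stanza
# format and interface names are invented for illustration.

INTERFACE_TEMPLATE = (
    "interface {name}\n"
    "  description {description}\n"
    "  switchport access vlan {vlan}\n"
)

def render_interfaces(interfaces):
    """Render one config stanza per interface definition."""
    return "\n".join(INTERFACE_TEMPLATE.format(**intf) for intf in interfaces)

config = render_interfaces([
    {"name": "swp1", "description": "web-tier", "vlan": 10},
    {"name": "swp2", "description": "db-tier", "vlan": 20},
])
```

The same data could just as easily drive an orchestration tool; the point is that the source of truth becomes structured data, not a device CLI session.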

Stir in the profound scale-out requirements of mega-scale web/cloud providers and the creation and adoption of Open Compute Project (OCP)-compliant network, storage and compute platforms, and there’s a real revolution going on:

The Open Compute Networking Project is creating a set of technologies that are disaggregated and fully open, allowing for rapid innovation in the network space. We aim to facilitate the development of network hardware and software – together with trusted project validation and testing – in a truly open and collaborative community environment.

We’re bringing to networking the guiding principles that OCP has brought to servers & storage, so that we can give end users the ability to forgo traditional closed and proprietary network switches – in favor of a fully open network technology stack. Our initial goal is to develop a top-of-rack (leaf) switch, while future plans target spine switches and other hardware and software solutions in the space.

Now, interestingly, while there are fundamental shifts occurring in the approach to and operations of security — the majority of investment in which is still network-centric — as an industry, we are still used to buying our security solutions as closed appliances or chassis form-factors from vendors with integrated hardware and software.

While vendors offer virtualized versions of their hardware solutions as virtual appliances that can also run on bare metal, these have generally not enjoyed widespread adoption because of the operational challenges involved in distributing security as a service layer — whether across dedicated appliances or across compute fabrics as an overlay — and the operational silos that distinction creates.

But let’s just agree that outside of security, software is eating the world…and that at some point, the voracious appetite of developers and consumers will need to be sated as it relates to security.

Much of the value (up to certain watermark levels of performance and latency) of security solutions is delivered via software which, when coupled with readily available hardware platforms such as x86 with programmable merchant silicon, can provide some very interesting and exciting solutions at a much lower cost.

So why, then, have we not seen for security what we’ve seen from networking vendors who have released OCP-compliant white-box switching solutions that allow end-users to run whatever software/NOS they desire?

I think it would be cool to see an OCP white box spec for security and let the security industry innovate on the software to power it.

Ordinarily, these two events would not be related, except I was also tracking down a local disk utilization issue that was vexing me on a day-to-day basis: my local SSD storage would ephemerally increase/decrease by GIGABYTES and I couldn’t figure out why.

So this evening, quite literally as I was reading RSnake’s interesting blog post titled “So your nude selfies were just hacked,” a Growl notification popped up informing me that several new Dropbox files were completing synchronization.

Puzzled because I wasn’t aware of any public shares and/or remote folders I was synching, I checked the Dropbox synch status and saw a number of files that were unfamiliar — and yet the names of the files certainly piqued my interest…they appeared to belong to a very good friend of mine given their titles. o_O

I checked the folder these files were resting in — gigabytes of them — and realized it was a shared folder that I had set up 3 years ago to allow a friend of mine to share a video from one of our infamous Jiu Jitsu smackdown sessions at the RSA Security Conference. I hadn’t bothered to unshare said folder for years, especially since my cloud storage quota kept increasing while my local storage didn’t.

As I put 1 and 1 together, I realized that for at least a couple of years, Jeremiah (Grossman) had been using this shared Dropbox folder titled “Dropit” as a repository for file storage, thinking it was HIS!

This is why gigs of storage were appearing/disappearing from my local storage when he added/removed files — but I didn’t see the synch messages and thus didn’t see the filenames.

I jumped on Twitter and engaged Jer in a DM session (see below) where I was laughing so hard I was crying…he eventually called me and I walked him through what happened.

Once we came to terms with what had happened (and how much fun I could have with this), Jer ultimately copied the files off the share and I unshared the Dropbox folder.

We agreed it was important to share this event because, like previous issues each of us has had, we’re all about honest disclosure so we (and others) can learn from our mistakes.

The lessons learned?

Dropbox doesn’t make it clear whether a folder that’s shared and mounted is yours or someone else’s — they look the same.

Ensure you know where your data is synching to! Services like Dropbox, iCloud, Google Drive, SkyDrive, etc. make it VERY easy to forget where things are actually stored!

Check your logs and/or enable things like Growl notifications (on the Mac) to ensure you can see when things are happening.

Unshare things when you’re done. Audit these services regularly.

Even seasoned security pros can make basic security/privacy mistakes; I shared a folder and didn’t audit it and Jer put stuff in a folder he thought was his. It wasn’t.

Never store nudie pics in a folder you don’t encrypt — and as far as I can tell, Jer didn’t…but I DIDN’T CLICK…HONEST!
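One way to act on the “audit these services regularly” lesson is simply to total up what each top-level folder in your sync root is holding — mysterious gigabytes of growth would have flagged “Dropit” years earlier. A minimal sketch, assuming a local sync directory on disk (the path is whatever your Dropbox/Drive folder is):

```python
import os

def folder_sizes(sync_root):
    """Total the bytes under each top-level folder of a sync root (e.g. your
    local Dropbox directory) -- a crude but effective way to spot a shared
    folder that is ballooning with files you didn't put there."""
    sizes = {}
    for entry in os.scandir(sync_root):
        if entry.is_dir():
            total = 0
            for dirpath, _dirnames, filenames in os.walk(entry.path):
                for name in filenames:
                    path = os.path.join(dirpath, name)
                    if os.path.isfile(path):
                        total += os.path.getsize(path)
            sizes[entry.name] = total
    return sizes
```

Run it periodically and diff the output; a folder that swings by gigabytes you can’t account for deserves a look at its sharing status.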

Jer and I laughed our asses off, but imagine if this had been confidential information or embarrassing pictures and I wasn’t his friend.

Rich Mogull (Securosis) and I have given a standing set of talks over the last 5-6 years at the RSA Security Conference that focus on innovation, disruption and ultimately making security practitioners more relevant in the face of all this churn.

We’ve always offered practical peeks of what’s coming and what folks can do to prepare.

This year, we (I should say mostly Rich) built a bunch of Ruby code that leveraged stuff running in Amazon Web Services (and using other Cloud services) to show how security folks with little “coding” capability could build and deploy this themselves.

Specifically, this talk was about SecDevOps — using principles that allow for automated and elastic cloud services to do interesting security things that can be leveraged in public and private clouds using Chef and other assorted mechanisms.
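Our demos were Rich’s Ruby against AWS, but the flavor of the idea translates to a few lines in any language. As a purely hypothetical illustration (the rule schema and port list are invented for the example, not from our talk): a SecDevOps pipeline can run automated guardrails over proposed firewall/security-group rules before anything is pushed to a cloud API.

```python
# Hypothetical SecDevOps guardrail: vet candidate security-group rules in
# the deployment pipeline, before any cloud API call is made. The rule
# schema and forbidden-port list are illustrative assumptions.

def find_violations(rules, forbidden_ports=(22, 3389)):
    """Return rules that expose sensitive admin ports to the entire Internet."""
    return [
        rule for rule in rules
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in forbidden_ports
    ]

proposed = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # fine: public HTTPS
    {"port": 22, "cidr": "0.0.0.0/0"},    # violation: SSH open to the world
    {"port": 22, "cidr": "10.0.0.0/8"},   # fine: SSH from an internal range
]
bad = find_violations(proposed)
```

Wire a check like this into Chef (or whatever drives your provisioning) and a bad rule fails the run instead of reaching production — that’s the elastic, automated part of the pitch.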

I also built a bunch of stuff using the RackSpace Private Cloud stack and Chef, but didn’t have the wherewithal or time to demonstrate it — and doing live demos over a tethered iPad connection to AWS meant that if it sucked, it was Rich’s fault.

You can find the presentation here (it clearly doesn’t include the live demos):

The insufferable fatigue of imprecise language with respect to “stopping” DDoS attacks caused me to tweet something that my pal @CSOAndy suggested was just as pedantic and wrong as that against which I railed:

I think it's fair to say that you can't "stop" a DDoS attack unless you can dispatch the endpoints used for its bidding. "Weather," perhaps

My point, ultimately, is that in the context of DDoS mitigation such as offload scrubbing services, unless one prevents the attacker(s) from generating traffic, the attack is not “stopped.” If a scrubbing service redirects traffic and absorbs it, and the attacker continues to send packets, the “attack” continues because the attacker has not been stopped — he/she/they have been redirected.
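A toy model (all numbers invented) makes the distinction concrete: scrubbing changes where the packets land, not whether they are sent.

```python
def scrubbed_outcome(attack_packets, scrubbing_enabled):
    """Toy model of offload scrubbing: redirection absorbs traffic at the
    scrubber, but the attacker keeps transmitting either way."""
    delivered_to_victim = 0 if scrubbing_enabled else attack_packets
    absorbed_by_scrubber = attack_packets if scrubbing_enabled else 0
    return {
        "victim_sees": delivered_to_victim,
        "scrubber_absorbs": absorbed_by_scrubber,
        # The attack only "stops" if the attacker stops transmitting.
        "attack_stopped": attack_packets == 0,
    }

result = scrubbed_outcome(attack_packets=1_000_000, scrubbing_enabled=True)
```

The victim’s outcome improves dramatically; the `attack_stopped` flag stays false. That’s the whole pedantic point.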

Now, has the OUTCOME changed? Absolutely. Has the intended victim possibly been spared the resultant denial of service? Quite possibly. Could there even now possibly be extra “space in the pipe?” Uh huh.

Has the attack “stopped” or ceased? Nope. Not until the spice stops flowing.

During the 2014 RSA Conference, I participated on a repeating panel with Bret Hartman, CTO of Cisco’s Security Business Unit, and Martin Brown from BT. The first day was moderated by Jon Oltsik, while on the second day the three of us were left to, um, self-moderate.

It occurred to me that during our very lively (and packed) second day wherein the audience was extremely interactive, I should boost the challenge I made to the audience on day one by offering a little monetary encouragement in answering a question.

Since the panel was titled “Network Security Smackdown: Which Technologies Will Survive?,” I offered a $20 kicker to anyone who could come up with a legitimate counter example — give me one “network security” technology that has actually gone away in the last 20 years.

<chirp chirp>

Despite Bret trying to pocket the money and many folks trying valiantly to answer, I still have my twenty bucks.

So-called Next Generation Firewalls (NGFW) are those that extend “traditional port firewalls” with the added context of policy with application visibility and control to include user identity while enforcing security, compliance and productivity decisions to flows from internal users to the Internet.

NGFW, as defined, is a campus and branch solution. Campus and Branch NGFW solves the “inside-out” problem — applying policy from a number of known/identified users on the “inside” to a potentially infinite number of applications and services “outside” the firewall, generally connected to the Internet. They function generally as forward proxies with various network insertion strategies.

Campus and Branch NGFW is NOT a Data Center NGFW solution.

Data Center NGFW is the inverse of the “inside-out” problem. They solve the “outside-in” problem; applying policy from a potentially infinite number of unknown (or potentially unknown) users/clients on the “outside” to a nominally diminutive number of well-known applications and services “inside” the firewall that are exposed generally to the Internet. They function generally as reverse proxies with various network insertion strategies.

Campus and Branch NGFWs need to provide application visibility and control across potentially tens of thousands of applications, many of which are evasive.

Data Center NGFWs need to provide application visibility and control across a significantly fewer number of well-known managed applications, many of which are bespoke.

There are wholesale differences in performance, scale and complexity between “inside-out” and “outside-in” firewalls. They solve different problems.
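The inside-out/outside-in asymmetry above can be sketched as a toy flow classifier (zone labels are illustrative, not vendor terms):

```python
def ngfw_role(flow):
    """Toy classifier for the asymmetry above: which NGFW model a flow's
    direction calls for. Zone labels are illustrative assumptions."""
    src, dst = flow["src_zone"], flow["dst_zone"]
    if src == "inside" and dst == "internet":
        return "campus_branch"  # inside-out: known user -> myriad outside apps
    if src == "internet" and dst == "inside":
        return "data_center"    # outside-in: unknown client -> few known apps
    return "east_west"          # intra-DC traffic: neither model fits cleanly
```

Note the third branch: the growing volume of east-west traffic matches neither proxy model, which foreshadows the segmentation discussion below.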

The things that make a NGFW supposedly “special” and different from a “traditional port firewall” in a Campus & Branch environment are largely irrelevant in the Data Center. Speaking of which, you’d find it difficult to find solutions today that are simply “traditional port firewalls”; the notion that firewalls integrated with IPS, UTM, ALGs, proxies, integrated user authentication, application identification/granular control (AVC), etc., are somehow incapable of providing the same outcome is now largely a marketing distinction.

While both sets of NGFW solutions share a valid deployment scenario at the “edge” or perimeter of a network (C&B or DC), a further differentiation in DC NGFW is the notion of deployment in the so-called “core” of a network. The requirements in this scenario mean comparing the two deployment scenarios is comparing apples and oranges.

Firstly, the notion of a “core” is quickly becoming an anachronism from the perspective of architectural references, especially given the advent of collapsed network tiers and fabrics as well as the impact of virtualization, cloud and network virtualization (nee SDN) models. Shunting a firewall into these models is often difficult, no matter how many interfaces it has. Flows are also asynchronous and oftentimes stateless.

Traditional Data Center segmentation strategies are becoming a blended mix of physical isolation (usually for compliance and/or peace of mind o_O) with a virtualized overlay provided in the hypervisor and/or virtual appliances. Traffic patterns have also shifted: machine-to-machine, east-west flows via intra-enclave “pods” are now far more common. Dumping all flows through one firewall (or a cluster) at the “core” does what, exactly — besides adding latency and oftentimes obscured or unnecessary inspection?

Add to this the complexity of certain verticals in the DC where extreme low-latency “firewalls” are needed with requirements at 5 microseconds or less. The sorts of things people care about enforcing from a policy perspective aren’t exactly “next generation.” Or, then again, how about DC firewalls that work at the mobile service provider eNodeB, mobile packet core or Gi with specific protocol requirements not generally found in the “Enterprise?”

In these scenarios, claims that a Campus & Branch NGFW is tuned to defend against “outside-in” application-level attacks against workloads hosted in a Data Center are specious at best. Slapping a bunch of those Campus & Branch firewalls together in a chassis and calling it a Data Center NGFW invokes ROFLcoptr.

Show me how a forward-proxy-optimized C&B NGFW deals with a DDoS attack (assuming the pipe isn’t flooded in the first place). Show me how a forward-proxy-optimized C&B NGFW deals with application-level attacks manipulating business logic and webapp attack vectors across known-good or unknown inputs.

They don’t. So don’t believe the marketing.

I haven’t even mentioned the operational model and expertise deltas needed to manage the two. Or integration between physical and virtual zoning, or on/off-box automation and visibility to orchestration systems such that policies are more dynamic and “virtualization aware” in nature…

In my opinion, NGFW is being redefined by the addition of functionality that again differentiates C&B from DC based on use case. Here are JUST two of them:

C&B NGFW is becoming what I call C&B NGFW+, specifically the addition of advanced anti-malware (AAMW) capabilities at the edge to detect and prevent infection as part of the “inside-out” use case. This includes adjacent solutions that include other components and delivery models.

DC NGFW is becoming DC NGFW+, specifically the addition of (web) application security capabilities and DoS/DDoS capabilities to prevent (generally) externally-originated attacks against internally-hosted (web) applications. This, too, requires the collaboration of other solutions specifically designed to enable security in this use case.

There are hybrid models that often take BOTH solutions to adequately protect against client infection, distribution and exploitation in the C&B to prevent attacks against DC assets connected over the WAN or a VPN.

Pretending both use cases are the same is farcical.

It’s unlikely you’ll see a shift in analyst “Enchanted Dodecahedrons” relative to functionality/definition of NGFW because…strangely…people aren’t generally buying Campus and Branch NGFW for their datacenters because they’re trying to solve different problems. At different levels of scale and performance.

A Campus and Branch NGFW is “No Good For Workloads” in the Data Center.