I'm attending Black Hat this year, and one of the most interesting
and controversial talks so far was "SexyDefense - Maximizing the
home-field advantage" by Iftach Ian Amit.

Ian opened with some very good advice about the defensive mindset: there
is no final, optimal, best-practice security strategy. It's:

a) always evolving
b) specific to your organisation

Security compliance testing by itself does not improve organisational
security. It's what the organisation does after the compliance test or
penetration test that matters.

The theme of the conference as a whole this year seems to be that the
concept of a "perimeter defense" is dead. There will always be gaps and
breaches. We need to concentrate on detecting them as soon as possible
and responding as effectively as possible.

Our focus should be on finding the next gap in security instead of
looking for someone to blame for the previous gap.

Another useful piece of advice is to log everything everywhere and
filter later. Storage is cheap; missing an early sign of attack is
expensive.

Some early warnings will come from well outside any IDS: the volume
of calls to support, unusual sales enquiries, or odd PC behavior
reported by your staff.

One example of the more offensive tactics Ian suggested was taking the
DarkComet tool, infecting it with itself, and uploading it back to a
popular "toolz" website. Everyone who downloaded that version of the
tool was 0wned. Another example was modifying a dodgy packer to leave a
distinct signature.

The only caveat was: get legal advice appropriate to your country before
attempting that at home.

I have many issues with this approach beyond the legal ones. I tried to
discuss them with Ian after the talk, but his position is "we work in a
tainted space; it's naive to think we can do that wearing white gloves".

But let's consider the classic principle of anti-virus companies: "Don't
modify malware; doing so is as bad as creating new malware of your own."
Is it really naive? We follow this principle even internally, never mind
uploading modified malware anywhere else.

Before we consider the moral issues, let's consider the usefulness of
this approach.

Why would someone download a new version of your malware? I'd have
thought you would need to provide some useful new functionality. Ian
assured me that's not the case: most bad guys will just grab the latest
version even if there's nothing new in it.

This doesn't cover the really clever adversaries: they build their own
tools or aren't willing to trust random code. Still, we can catch the
script kiddies while wearing white gloves.

So, you've got some percentage of low-skill hackers who will use your
modified tools, and you're safe from those attacks. What about all the
other attacks:

1. High-skill hackers who will use other tools against you
2. Low-skill hackers who get their tools elsewhere

If you play that game, how long before you actually write some new
attack capabilities into your malware tools, to increase their adoption
or to raise your street cred in the group you are infiltrating?

This slope is much more slippery than a simple "don't modify even one
byte of malware" rule.

All of the above assumes that the modification went to plan, and you've
done exactly what you wanted to do to this malware. As a developer, I
can tell you it's not a good assumption to make with any piece of
software, and I don't see why malware would be different.

Do you really want a new virus in the wild on your conscience? Even if
your tools can detect it, what about everybody else's tools?

So now we are back to the moral side of the story. Going back to our
comparison to the physical world, this talk seems to suggest that we
make guns with a known ballistic signature and hand them out to
criminals.

In the words of multiple James Bond villains: what could possibly go
wrong?