Monday, October 1, 2018

The information security industry, lacking social inhibitions, generally rolls its eyes at anything remotely hinting at being a "silver bullet" for security. Despite that obvious cue, marketing teams remain undeterred in labeling their company's upcoming widget as the savior from the next security threat (or the last one - depending on what's in the news today).

I've joked in the past that the very concept of a silver bullet is patently wrong - as if silver would make a difference. No, the silver bullet must in fact be water. After all, chucking a bucket of water on a compromised server is guaranteed to stop the attacker dead in their tracks.

Bad jokes aside, the fundamental problem with InfoSec has less to do with the technology being proposed or deployed to prevent this or that class of threat, and more to do with the lack of buyers willing to change their broken security practices to complement their new technology investment.

Too many security buyers are effectively looking for the diet pill solution. Rather than adjusting internal processes and dropping bad practices, there is eternal hope that the magical security solution will fix all ills and the business can continue to binge on deep-fried Mars bars and New York Cheesecakes.

As they say, "hope springs eternal".

Just as a medical doctor's first-line advice is to exercise more and eat healthily, our corresponding security advice is to harden your systems and keep up to date with patching.

Expecting the next diet pill solution to cure all your security ills is ludicrous. Get the basics right, and get them right all the time, then expand from there.

Friday, September 21, 2018

So far this year I think I've attended 20+ security conferences around the world - speaking at many of them. Along the way I got to chat with hundreds of attendees and gather their thoughts on what they hoped to achieve or learn at each of these conferences.

In way too many cases I think the conference organizers have missed the mark.

I'd like to offer the following thoughts and feedback to the people organizing and facilitating these conferences (especially those catering to local security professionals):

Attendees have had enough of stunt hacking presentations. By all means, throw in one or two qualified speakers on some great stunt hack - but use them as sparingly as keynotes.

Highly specialized, borderline stunt-hacking topics alienate many of the attendees. Sure, it's fun to have a deep-dive hacking session on voting machines, smart cars, etc., but when every session is focused on (what is essentially an) "edge" security device that most attendees will never be charged with attacking or defending... it's not merely overwhelming, it becomes noise that can't be applied in "real life" by the majority of attendees.

As an industry we're desperately trying to engage those entering the job market and "sell" them on our security profession. Trinket displays of security (e.g. CTFs, lock-picking) sound more interesting to people already in security... and much less so to those just entering the job market. Let's face it, no matter how much they enjoy picking locks, it's unlikely to be a qualification for first-line SOC analysts. Even for those that have been in the industry for a few years, these cliché trinket displays of security "skill" have become tired... and look like wannabe DEF CONs.

Most attendees really want to LEARN something that they can APPLY to their job. They're looking for nuggets of smartness that can be used tomorrow in the execution of their job.

Here are a few thoughts for security (/hacker) conference organizers:

Have a track (or two) specifically focused on attack techniques (or defense techniques) where each presented session can clearly say what new skill or technique the attendee will have acquired as they leave the hallowed chamber of security knowledge goodness. This may be as simple as escalating existing skills e.g. "if you're a 5 on XSS today, by the end of the session you'll have reached a 7 in XSS against SAP installations", or "you'll learn how to use Jupyter Notebooks for managing threat hunt collaboration". The objective is simple: an attendee should be able to apply new skills and expertise tomorrow... at their day job.

Get more people presenting, and presenting for less time. Encourage a broader range of speakers to present on practical security topics. I think many attendees would love to see an "open mic" speaker track where security professionals (new and upcoming) can present deep dives on interesting security topics and raise questions to attendees for help/guidance/answers. For example, the speaker has deep-dived into blocking spear-phishing emails using XYZ product but identified that certain types of email vectors evade it... they present proposals for improvement... and the attendees add their collective knowledge. It encourages interaction and (ideally) helps to solve real-world problems.

An iteration of the idea above, but focused on students, those job hunting for security roles, or those on the first rung of the security ladder... a track where they can present on a vetted security topic, and a panel of security veterans evaluates the presentation - the content and the delivery - and provides rewards. In particular, I'd love to see (and ensure) that the presentation is recorded, and the presentation material is available for download (including maybe a backup whitepaper). Why? Because I'd encourage these speakers to reference and link to these resources (and conference awards) in their resumes/CVs so they can differentiate themselves in the hiring market.

Finally, I'd encourage (and offer myself up for participation in) a track for practicing and refining interview techniques. It's daunting for all new starters in our industry to successfully navigate an interview with experienced and battle-weary security professionals. It takes practice, guidance, and encouragement. In reality, starter interviewees have less than 15 minutes to establish their technical depth, learning capability, and group compatibility. On the flip side, I'd include learning and practice sessions for technical security hiring managers on overcoming biases and encouraging diversity. We're an industry full of introverts and know-it-alls that genuinely want to help... but we all need a little help and coaching in this critical area.

Despite headlines now at least a couple years old, the InfoSec world is still (largely) paying lip service to the lack of security talent and the growing skills gap.

The community is apt to quote and brandish the dire figures, but unless you're actually a hiring manager striving to fill low to mid-level security positions, you're not feeling the pain - in fact there's a high probability many see the problem as a net positive in terms of their own employment potential and compensation.

I see today's Artificial Intelligence (AI) and the AI-based technologies that'll be commercialized over the next 2-3 years as exacerbating the problem - but also offering up a silver-lining.

I've been vocal for decades that much of the professional security industry is, and should be, methodology based - and, by being methodology based, reliably repeatable; whether that be bug hunting, vulnerability assessment, threat hunting, or even incident response. If a reliable methodology exists, and the results can be consistently verified correct, then the process can be reliably automated. Nowadays, that automation lies firmly in the realm of AI - and the capabilities of these newly emerged AI security platforms are already reliably outperforming tier-one (e.g. 0-2 years experience) security professionals.

In some security professions (such as auditing & compliance, penetration testing, and threat hunting) AI-based systems are already capable of performing at tier-two (i.e. 2-8 years experience) levels for 80%+ of the daily tasks.

On one hand, these AI systems alleviate much of the problem related to the shortage and global availability of security skills at the lower end of the security professional ladder. So perhaps the much touted and repeated shortage numbers don't matter - and the extrapolation of current shortages into future open positions is overestimated.

However, if AI solutions consume the security roles and daily tasks equivalency of 8-year industry veterans, have we also created an insurmountable chasm for recent graduates and those who wish to transition onto the InfoSec professional ladder?

While AI is advancing the boundaries of defense and, frankly, an organization's ability to detect and mitigate threats has never been better (and will be even better tomorrow), there are still large swathes of the security landscape that AI has yet to solve. In fact many of these new swathes have only opened up to security professionals because AI has made them available.

What I see in our AI Security future is more of a symbiotic relationship.

AIs will continue to speed up the discovery and mitigation of threats, and get better and more accurate along the way. It is inevitable that tier-two security roles will succumb and eventually be replaced by AI. What will also happen is that security professional roles will change from the application of tools and techniques into business risk advisers and supervisors. Understanding the business, communicating with colleagues in other operational facets, and prioritizing risk response, are the intangibles that AI systems will struggle with.

In a symbiotic relationship, security professionals will guide and communicate these operations in terms of business needs and risk. Just as Internet search engines have replaced the voluminous Encyclopedia Britannica and Encarta, and the Dewey Decimal system, Security AI is evolving to answer any question a business may raise about defending their organization - assuming you ask the right question, and know how to interpret the answer.

With regards to the skills shortage of today - I truly believe that AI will be the vehicle to close that gap. But I also think we're in for a paradigm change in who we'll be welcoming into our organizations and employing in the future because of it.

I think that the primary beneficiaries of these next generation AI-powered security professional roles will not be recent graduates. With a newly level playing field, I anticipate that more weathered and "life experienced" people will assume more of these roles.

For example, given the choice between a 19-year-old freshly minted graduate in computer science and a 47-year-old woman with 25 years of applied mechanical engineering experience in the "rust belt" of the US... those life skills will inevitably be more applicable to making risk calls and communicating them to the business.

In some ways the silver lining may be for the middle America that has suffered and languished as technology has moved on from coal mining and phone-book printing. It's quite probable that it will become a hot-spot for newly minted security professionals - leveraging their past (non-security) professional experiences, along with decades of people or business management and communication skills - and closing the security skills gap using AI.

Tuesday, April 24, 2018

Ample evidence exists to underline that shortcomings in a third party's cyber-security posture can have an extremely negative effect on the security integrity of the businesses they connect or partner with. Consequently, there's been a continuous and frustrated desire, for a couple of decades now, for some kind of independent verification or scorecard mechanism that can help primary organizations validate and quantify the overall security posture of the businesses they must electronically engage with.

A couple decades ago organizations could host a small clickable logo on their websites – often depicting a tick or permutation of a “trusted” logo – that would display some independent validation certificate detailing their trustworthiness. Obviously, such a system was open to abuse. For the last 5 or so years, the trustworthiness verification process has migrated ownership from the third-party to a first-party responsibility.

Today, there are a growing number of brand-spanking-new start-ups adding to the pool of slightly longer-in-the-tooth companies taking on the mission of independently scoring the security and cyber integrity of organizations doing business over the Web.

The general premise of these companies is that they'll undertake a wide (and widening) range of passive and active probing techniques to map out a target organization's online assets, crawl associated sites and hidden crevasses (underground, over ground, wandering free… like the Wombles of Wimbledon?) to look for leaks and unintended disclosures, evaluate current security settings against recommended best practices, and even dig up social media dirt that could be useful to an attacker; all as contributors to a dynamic report and ultimate "scorecard" that is effectively sold to interested buyers or service subscribers.

I can appreciate the strong desire for first-party organizations to have this kind of scorecard on hand when making decisions on how best to trust a third-party supplier or partner, but I do question a number of aspects of the business model behind providing such security scorecards. And, as someone frequently asked by technology investors looking for guidance on the future of such business ventures, there are additional things to consider as well.

Are Cyber Scorecarding Services Worth It?
As I gather my thoughts on the business of cyber scorecarding and engage with the purveyors of such services again over the coming weeks (post RSA USA Conference), I'd offer up the following points as to why this technology may still have some business wrinkles and why I'm currently questioning the long-term value of the business model.

1. Lack of scoring standards
There is no standard to the scorecards on offer. Every vendor is vying to make their scoring mechanism the future of the security scorecard business. As vendors add new data sources or encounter new third-party services and configurations that could influence a score, they’re effectively making things up as they go along. This isn’t necessarily a bad thing and ideally the scoring will stabilize over time at a per vendor level, but we’re still a long way away from having an international standard agreed to. Bear in mind, despite two decades of organizations such as OWASP, ISSA, SANS, etc., the industry doesn’t yet have an agreed mechanism of scoring the overall security of a single web application, let alone the combined Internet presence of a global online business.

2. Heightened Public Cloud Security
Third-party organizations that have moved to the public cloud, have enabled the bulk of the default security features that are freely available to them, and are using the automated security alerting and management tools provided, are already very secure – much more so than their previous on-premise DIY efforts. As more organizations move to the public cloud, they all begin to have the same security features, so why would a third-party scorecard be necessary? We're rapidly approaching a stage where just having an IP address in a major public cloud puts your organization ahead of the pack from a security perspective. Moreover, I anticipate that the default security of public cloud providers will continue to advance in ways that are not easily externally discernible (e.g. impossible travel protection against credential misuse) – and these kinds of ML/AI-led protection technologies may be more successful than the traditional network-based defense-in-depth strategies the industry has pursued for the last twenty-five years.
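As an aside, "impossible travel" detection is simple to sketch, even though cloud providers implement it at scale with far richer signals. Here's a minimal illustration (the function names, field names, and the 900 km/h speed threshold are my own assumptions, not any provider's implementation):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0
MAX_TRAVEL_SPEED_KMH = 900.0  # roughly the speed of a commercial flight

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def impossible_travel(login_a, login_b):
    """True if the distance between two logins could not plausibly be
    covered in the time between them. Each login is a dict with 'lat',
    'lon', and a Unix timestamp 'ts'."""
    distance = haversine_km(login_a["lat"], login_a["lon"],
                            login_b["lat"], login_b["lon"])
    hours = abs(login_b["ts"] - login_a["ts"]) / 3600.0
    if hours == 0:
        return distance > 0
    return distance / hours > MAX_TRAVEL_SPEED_KMH
```

A login from London followed an hour later by one from Sydney trips the check; two logins across town do not. The point is that this signal only exists because the provider sees every authentication event – it's invisible to an external scorecard probe.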

3. Score Representations
Not only is there no standard for scoring an organization’s security, it’s not clear what you’re supposed to do with the scores that are provided. This isn’t a problem unique to the scorecard industry – we’ve observed the phenomenon for CVSS scoring for 10+ years.
At what threshold should I be worried? Is a 7.3 acceptable, while a 7.6 means I must patch immediately? How much more of a risk to my business is an organization that scores 55 versus a vendor that scores 61?
The thresholds for action (or inaction) based upon a score are arbitrary and will be in conflict with each new advancement or input the scorecard provider includes as they evolve their service. Is the 88.8 of January the same as the 88.8 of May after the provider added new features that factored in CDN provider stability and Instagram crawling? Does this month’s score of 78.4 represent a newly introduced weakness in the organization’s security, or is the downgraded score an artifact of new insights that weren’t accounted for previously by the score provider?
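A toy calculation illustrates the drift. Assuming a scorecard is just a weighted average of per-factor scores (the factor names and weights below are hypothetical):

```python
def weighted_score(factors, weights):
    """Composite 0-100 score as a weighted average of per-factor scores."""
    total = sum(weights.values())
    return sum(factors[name] * w for name, w in weights.items()) / total

# January: the provider scores three (invented) factors.
factors = {"tls_config": 90, "patch_cadence": 85, "leaked_creds": 92}
jan_weights = {"tls_config": 1.0, "patch_cadence": 1.0, "leaked_creds": 1.0}

# May: same organization, unchanged posture, but the provider now
# also factors in CDN stability.
factors["cdn_stability"] = 60
may_weights = dict(jan_weights, cdn_stability=1.0)
```

Nothing about the organization changed between January and May, yet the composite falls from 89.0 to 81.75 purely because the provider began measuring something new – exactly the ambiguity described above.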

4. Historical References and Breaches
Then there’s the question of how much of an organizations past should influence its future ability to conduct business more securely. If a business got hacked three years ago and the responsibly disclosed and managed their response – complete with reevaluating and improving their security, does another organization with the same current security configuration have a better score for not having disclosed a past breach?
Organizations get hacked all the time – it’s why modern security now works on the premise of “assume breach”. The remotely visible and attestable security of an organization provides no real insights in to whether they are currently hacked or have been recently breached.

5. Gaming of Scorecards
Gaming of the scorecard systems is trivial and difficult to defend against. If I know who my competitors are and which scorecard provider (or providers) my target customer is relying upon, I can adversely affect their scores. A few faked “breached password lists” posted to PasteBin and underground sites, a handful of spam and phishing emails sent, a new domain name registration and craftily constructed website, a few subtle contributions to IP blacklists, etc. and their score is affected.
I haven’t looked recently, but I wouldn’t be surprised if some blackhat entrepreneurs haven’t already launched such a service line. I’m sure it could pay quite well and requires little effort beyond the number of disinformation services that already exist underground. If scorecarding ever becomes valuable, so too will its deception.

6. Low Barrier to Market Entry
The barrier to entry into the scorecarding industry is incredibly low. Armed with "proprietary" techniques and "specialist" data sources, anyone can get started in the business. If for some reason third-party scorecarding becomes popular and financially lucrative, then I anticipate that any of the popular managed security services providers (MSSP) or automated vulnerability assessment (VA) providers could launch a competitive service with as little as a month's notice and only a couple of engineers.
At some point in the future, if there ever were to be standardization of scorecard scores and evaluation criteria, that's when the large MSSPs and VA providers would likely add such a service. The problem for all the new start-ups and longer-toothed start-ups is that these MSSPs and VA providers would have no need to acquire the technology or clientele.

7. Defending a Score
Defending the integrity and righteousness of your independent scoring mechanism is difficult and expensive. Practically all the scorecard providers I've met like to explain their efficacy of operation as if it were a credit bureau's credit score – as if that explains the ambiguities of how they score. I don't know all the data sources and calculations that credit bureaus use in their rating systems, but I'm pretty sure they're not port scanning websites, scraping IP blacklists, and enumerating service banners – and I'm pretty sure the people being scored have far more control over the data that the scoring system relies upon.
My key point here, though, lies with the repercussions of getting the score wrong, or providing a score that adversely affects an organization's ability to conduct business online – regardless of the score's accuracy. The affected business will question the score, demand the provider "fix their mistake", and seek compensation for the damage incurred. In many ways it doesn't matter whether the scorecard provider is right or wrong – costs are incurred defending each case (in energy expended, financial resources, lost time, and lost reputation). For cases that eventually make it to court, I think the "look at the financial credit bureaus" defense will fall a little flat.

Final Thoughts
The industry strongly wants a scoring mechanism to help distinguish good from bad, and to help prioritize security responses at all levels. If only it were that simple, it would have been solved quite some time ago.

Organizations are still trying to make red/amber/green tagging work for threat severity, business risk, and response prioritization. Every security product tasked with uncovering or collating vulnerabilities, misconfigurations, aggregating logs and alerts, or monitoring for anomalies, is equally capable of (and likely is) producing their own scores.

Providing a score isn’t a problem in the security world, the problem lies in knowing how to respond to the score you’ve been presented with!

Thursday, March 8, 2018

Security Information and Event Management (SIEM) is feeling its age. Harkening back to a time in which businesses were prepping for the dreaded Y2K and when the cutting edge of security technology was bound to DMZs, Bastion Hosts, and network vulnerability scanning – SIEM has been along for the ride as both defenses and attackers have advanced over the intervening years. Nowadays, though, it feels less like a ride with SIEM, and more like towing an anchor.

Despite the deepening trench gouged by the SIEM anchor slowing down threat response, most organizations persist in throwing more money and resources at it. I'm not sure whether it's a sunk-cost fallacy or the lack of a viable technological alternative, but they continue to diligently trudge on with their SIEM – complaining with every step. I've yet to encounter an organization that feels its SIEM is anywhere close to scratching its security itch.

The SIEM of Today

The SIEM of today hasn’t changed much over the last couple
of decades with its foundation being the real-time collection and normalization
of events from a broad scope of security event log sources and threat alerting
tools. The primary objective of which was to manage and overcome the cacophony
of alerts generated by the hundreds, thousands, or millions of sensors and logging
devices scattered throughout an enterprise network – automatically generating
higher fidelity alerts using a variety of analytical approaches – and
displaying a more manageable volume of information via dashboards and reports.
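That normalization step amounts to mapping each source's native format onto a common event schema. A minimal sketch (the log formats and field names here are invented for illustration; real SIEMs ship hundreds of such parsers):

```python
import json

def normalize_syslog(line):
    """Parse a hypothetical firewall syslog line, e.g.
    '2018-03-08T10:15:00Z fw01 DROP src=10.0.0.5 dst=8.8.8.8',
    into a common event schema."""
    ts, host, action, *kv = line.split()
    fields = dict(pair.split("=") for pair in kv)
    return {"timestamp": ts, "source": host, "action": action.lower(),
            "src_ip": fields.get("src"), "dst_ip": fields.get("dst")}

def normalize_json_alert(blob):
    """Parse a hypothetical IDS JSON alert into the same schema."""
    evt = json.loads(blob)
    return {"timestamp": evt["time"], "source": evt["sensor"],
            "action": evt["verdict"].lower(),
            "src_ip": evt.get("client"), "dst_ip": evt.get("server")}
```

Once both sources land in one schema, aggregation, deduplication, and correlation rules can operate over a single event stream instead of N incompatible formats – which is the whole value proposition of SIEM.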

As the variety and scope of devices providing alerts and logs continue to increase (often exponentially), consolidated SIEM reporting has had to focus upon statistical analytics and trend displays to keep pace with the streaming data – increasingly focused on the overall health of the enterprise rather than threat detection and event risk classification.

Whilst the collection of alerts and logs is conducted in real-time, the ability to aggregate disparate intelligence and alerts to identify attacks and breaches has fallen to offline historical analysis via searches and queries – giving birth to the Threat Hunter occupation in recent years.

Along the way, SIEM has become the beating heart of the Security Operations Center (SOC) – particularly over the last decade – and it is often difficult for organizations to disambiguate SIEM from SOC. Not unlike Frankenstein's monster, additional capabilities have been grafted onto today's operationalized SIEMs: advanced forensics and threat hunting capabilities now dovetail into SIEM's event archive databases, a new generation of automation and orchestration tools has instantiated playbooks that process aggregated logs, and ticketing systems track responders' efforts to resolve and mitigate threats.

SIEM Weakness

There is, however, a fundamental weakness in SIEM, and it has become increasingly apparent over the last half-decade as more advanced threat detection tools and methodologies have evolved – facilitated by the widespread adoption of machine learning (ML) technologies and machine intelligence (MI).

Legacy threat detection systems such as firewalls, intrusion detection systems (IDS), network anomaly detection systems, anti-virus agents, network vulnerability scanners, etc. have traditionally had a high propensity towards false positive and false negative detections. Compounding this, for many decades (and still a large cause for concern today) these technologies have been sold and marketed on their ability to alert in volume – i.e. an IDS that can identify and alert upon 10,000 malicious activities is too often positioned as "better" than one that only alerts upon 8,000 (regardless of alert fidelity). Alert aggregation and normalization is of course the bread and butter of SIEM.

In response, a newer generation of vendors have brought forth new detection products that improve upon and replace most legacy alerting technologies – focused not only on finally resolving the false positive and false negative alert problem, but on moving beyond alerting and into mitigation – using ML and MI to facilitate behavioral analytics, big data analytics, deep learning, expert system recognition, and automated response orchestration.

The growing problem is that these new threat detection and mitigation products don't output alerts compatible with traditional SIEM processing architectures. Instead, they provide output such as evidence packages, logs of what was done to automatically mitigate or remediate a detected threat, and talk in terms of statistical risk probabilities and confidence values – having resolved a threat to a much higher fidelity than a SIEM could. In turn, "integration" with SIEM is difficult and all too often meaningless for these more advanced technologies.

A compounding failure with the new ML/MI powered threat detection and mitigation technologies lies with the fact that they are optimized for solving a particular class of threats – for example, insider threats, host-based malicious software, web application attacks, etc. – and have optimized their management and reporting facilities for that category. Without a strong SIEM integration hook there is no single pane of glass for SOC management; rather a half-dozen panes of glass, each with their own unique scoring equations and operational nuances.

Next Generation SIEM

If traditional SIEM has failed and is becoming more of a bugbear than ever, and the latest generation of ML and MI-based threat detection and mitigation systems aren't on a trajectory to coalesce by themselves into a manageable enterprise suite (let alone a single pane of glass), what does the next generation (i.e. NextGen) SIEM look like?

The NextGen SIEM lies in the natural evolution of today's best hybrid-SOC solutions. The Frankenstein add-ins and bolt-ons that have extended the life of SIEM for a decade are the very fabric of what must ascend and replace it.

For the NextGen SIEM – SOC-in-a-box, Cloud SOC, or whatever buzzword the professional marketers eventually coin – to be successful, the core tenets of operation will necessarily include:

Real-time threat detection, classification, escalation, and response. Alerts, log entries, threat intelligence, device telemetry, and indicators of compromise (IOC) will be treated as evidence for ML-based classification engines that automatically categorize and label their discoveries, and optimize responses to both threats and system misconfigurations in real-time.

Automation is the beating heart of SOC-in-a-box. With no signs of data volumes falling, networks becoming less congested, or attackers slackening off, automation is the key to scaling to the business's needs. Every aspect of SOC must be designed to be fully autonomous, self-learning, and elastic.

The vocabulary of security will move from "alerted" to "responded". Alerts are merely one form of telemetry that, when combined with overlapping sources of evidence, lay the foundation for action. Businesses need to know which threats have been automatically responded to, and which are awaiting a remedy or response.
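Combining overlapping sources of evidence can be pictured as simply as treating detector confidences as independent probabilities – real systems would also weigh each source's reliability. A naive sketch (the independence assumption is mine, not a claim about any product):

```python
def combined_confidence(confidences):
    """Probability that at least one detector is correct, assuming the
    detectors err independently (a simplifying assumption)."""
    p_all_wrong = 1.0
    for c in confidences:
        p_all_wrong *= (1.0 - c)
    return 1.0 - p_all_wrong
```

Two mediocre detectors each reporting 0.5 confidence combine to 0.75 – overlapping evidence builds a case for automated action even when no single alert is convincing on its own.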

The tier-one human analyst role ceases to exist, and playbooks will be self-generated. The process of removing false positives and gathering corroborating evidence for true positive alerts can be done much more efficiently and reliably using MI. In turn, threat responses by tier-two or tier-three analysts will be learned by the system – automatically constructing and improving playbooks with each repeated response.

Threats will be represented and managed in terms of business risk. As alerts become events, "criticality" will be influenced by age, duration, and threat level, and will sit adjacent to "confidence" scores that take into account the reliability of sources. Device auto-classification and responder monitoring will provide the framework for determining the relative value of business assets, and consequently the foundation for risk-based prioritization and management.

Threat hunting will transition to evidence review and preservation. Threat hunting grew from the failures of SIEM to correctly and automatically identify threats in real-time. The methodologies and analysis playbooks used by threat hunters will simply be part of what the MI-based system incorporates in real-time. Threat hunting experts will in turn focus on preservation of evidence in cases where attribution and prosecution become probable or desirable.

Hybrid networks become native. The business network – whether it exists in the cloud, on premise, at the edge, or in the hands of employees and customers – must be monitored, managed, and have threats responded to as a single entity. Hybrid networks are the norm, and attackers will continue to test and evolve hybrid attacks to leverage any mitigation omission.

Luckily, the NextGen SIEM is closer than we think. As SOC operations have increasingly adopted the cloud to leverage elastic compute and storage capabilities, hard-learned lessons in automation and system reliability from the growing DevOps movement have further defined the blueprint for SOC-in-a-box. Meanwhile, the current generation of ML-based and MI-defined threat detection products, combined with the rapid evolution of intelligence graphing platforms, have helped prove most of the remaining building blocks.

These are not wholly additions to SIEM, and SIEM isn't the skeleton of what will replace it.

The NextGen SIEM starts with the encapsulation of the best and most advanced SOC capabilities of today, incorporates its own behavioral and threat detection capabilities, and dynamically learns to defend the organization – finally reporting on what it has successfully resolved or mitigated.

Tuesday, March 6, 2018

Both new and returning attendees at technical security conferences are often puzzled by the presence of lock picking break-out areas and the gamut of hands-on tutorials. For an industry primarily focused on securing electronic packets of ones and zeros, an enthusiasm for manual manipulation of mechanical locks seems out of place to many.

Over the years, I’ve heard many reasons and justifications for the presence of lock picking villages, the hands-on training, and the multitude of booths selling the tools of the trade. The answers vary considerably and tend to be weighted by how much of a tinkerer or hacker the respondent thinks they are.

The reality – I think – can be boiled down to two primary reasons.

For longtime security professionals who now take to the stage to educate attendees on the fragility of the cyber-security domain, or who attempt to mentor and guide the in-bound generation of attackers and defenders, locks and lock picking serve as a valuable teaching aid. As such, through our influence, we encourage people to tinker and learn.

By examining how mechanical locks operate and how they have evolved to counter each new picking technique used to subvert earlier models, cyber-security professionals begin to appreciate three fundamentals of security:

Attackers learn by dissecting and studying the intricacies of the defenses before them, and must practice, practice, practice to defeat them;

Defenders must understand the tools and methodologies that the attackers avail themselves of if they are to devise and deploy better defenses; and

No matter how well thought-out in advance, the limitations of fabrication tolerances and the environments within which the security technology must operate will introduce new flaws and vectors for attack.

These are incredibly important lessons that must be learned. Would-be professionals seeking to get into penetration testing, red teaming, or reverse engineering can’t just pick up the latest Hacking Exposed edition and complete online Q&A exams – they must roll up their sleeves, accumulate hours of hands-on experience of both failures and successes, and build that muscle memory. Would-be defenders can’t just read the operations manuals of the devices they’ll be entrusted to protect, or sit through vendor training courses on how to operate threat detection systems – they must learn the tools of the attackers and (ideally) gain basic proficiency in their use if they’re to make valuable contributions to defense. Meanwhile, the third point is where both attackers and defenders need to learn humility – no matter how well we think we know a system or how often we’ve practiced against a technology, subtle flaws and unexpected permutations may undermine our best efforts through no fault of our own skills.

As a teaching aid, locks and lock picking are a tactile means of understanding the foibles of cyber security.

But there is a second reason… because it’s exciting and fun!

Lock picking feeds into the historical counter-culture of hacking. There’s a kind of excitement in learning how to defeat something near the edge of legitimacy – an illicit knowledge that for centuries has been the trade-craft of criminals.

With a few minutes of guidance and practice, the easiest locks begin to pop open and the hacker is drawn to the challenge of a harder lock, and so on. As frustrations grow, the reward of the final movement and pop of the lock is often as stimulating as scoring a goal in some kind of popular uniformed team sport.

The skills associated with mastering lock picking, however, have little translation to being a good hacker – except perhaps the single-minded intensity and tenaciousness to solve technical challenges.

I have noticed that there are a disproportionate number of hackers who are accomplished lock pickers, (semi)professional magicians, and wallflower introverts. Arguably, lock picking (and magic tricks) may be the hacker's best defense at uncomfortable social events. Rather than have an awkward conversation about sports or pop culture, it’s often time to whip out a lock and a pack of picks, and teach instead of prattle.

About Me

Hi, I'm Gunter Ollmann and I've been earning a living in IT (mostly in consulting) since the late 1980s. For the last decade or so I've been focused exclusively on Internet security - having built and led multiple professional hacking and security research organizations around the world.
I'm founder of Ablative Security Inc. and currently CTO for Security within the Cloud + Enterprise Security division at Microsoft - formerly CSO at Vectra AI, CTO at NCC Group, CTO at IOActive, and Chief Security Strategist at IBM Internet Security Systems. I tend to spend a lot of time investigating new threat vectors and cybercrime, taking a long-term strategic view of how Internet security is evolving, and helping define the protection technologies and services we'll need for the future.
You can also follow me on Twitter - http://twitter.com/gollmann. Note that any comments and blog postings here on Blogger are my personal thoughts and opinions, and do not necessarily reflect those of my employer.