Sunday, July 31, 2011

This is my last blog post –for the foreseeable future. It is dated 7/31/2011 at 11:59PM. What happens tomorrow? A new life, of course!

As only very few of you know, I have accepted a position of Research Director with Gartner, Inc. Tomorrow I am joining a stellar team led by Phil Schacter, formerly of Burton Group.

I spent two VERY successful years consulting, working with companies like Novell, RSA, LogLogic, NitroSecurity, eGestalt, ObserveIT, Tripwire, AlienVault, “Big MSSP”, “Big Insurance Company”, “SaaS Log Management Company”, “IT Management Software Company”, “SMB Security Company”, “Big Networking Equipment Company” and others. I defined, built, deployed, and marketed security products, mostly in the area of SIEM and log management. I helped organizations with security and PCI DSS strategy. I advised security vendor management on compliance strategy for their products. I have spoken at clients’ events and have written more whitepapers than I care to admit… and did a lot of other fun things!

It was fun and I loved it – and, as my clients can attest, I was good at it. I was also busier than I thought I’d be, and occasionally busier than I wanted to be. However, at some point I started to feel that I needed another step up. And so I am making that step now!

In all likelihood, I will eventually resurface at Gartner blogs – please look for me there. And those who love my personal blogging (all 4007 of you as of today), don’t despair – I will still occasionally blog here on non-infosec subjects: think good books, laser weapons, hypnosis, skiing, travel and my other weird hobbies.

Finally, I want to give very special thanks to Lee Kushner for his super-valuable career counseling that helped me make this difficult career choice.

Executive summary: you need to procure services when you buy a SIEM tool; if you don’t, you’ll be sorry later.

Even if you are amazingly intelligent and have extensive SIEM experience – see above. Even if you saw a successful SIEM project that didn’t include vendor or 3rd party services with your very eyes – see above. Even if your SIEM vendor tells you “you don’t need services” – see above. See above! See above!! See above!!!

Let’s analyze this “SIEM services paradox.” A lot of organizations – way too many, in fact – balk at the need to procure related services before, during and after their SIEM purchase. The thinking often goes like this: we need a SIEM and this box <points at the appliance still in the box> is a SIEM. That’s all we need. What services? Why services? Huh?

In reality – and this is what I sometimes call the “secret to SIEM magic” – that box is not a SIEM. That box, when racked and connected to your network, is STILL not quite a SIEM. Only when you “operationalize” it (see picture) can you say that you have a Security Information and Event Management (SIEM) capability in your organization and that you do “real-time” security monitoring.

Now, be honest: do you know how to deploy a SIEM tool and then figure out the shortest path to its operational success? Probably not… hence the services/consultants who will work WITH you to make it a reality by arriving at the best possible way of benefiting from SIEM in your environment. Which use case gives you the best bang for the SIEM buck? Which one will show a “quick win” to your management? Which one is more likely to detect an attacker in your network?

When a SIEM vendor tries to sell you services, it is NOT vendor greed – it is simply common sense. And if you say “no”, it is not “saving money” – it is being stupid. SIEM success out of the box (while real, in some cases!) is a pale shadow of what a well-thought-out deployment looks like! My [broken] analogy: buying a nice shiny Aston Martin and then only using it to commute to a train station 1 mile from home. Will it work? Yes. Is this a good investment and a good experience? Hell no!

And, no, outsiders alone cannot do it. You will need to help them help you.

This also leads to the rise of managed or co-managed SIEM options (which are NOT MSSPs, BTW!) as more people realize that a) they need a SIEM and b) they cannot handle a SIEM. Future cloud SIEM will (when it emerges) try to tackle the same problem of being simpler to operate and thus simpler to operationalize.

Today, most SIEM vendors offer an extensive menu of services to go with a product, and there are also some smart third parties. Many services around SIEM can be organized as follows.

Pre-sale services examples:

Product selection help

Vendor differentiation analysis and shortlist definition

Regulation analysis and business cases review

Product strengths/weaknesses analysis

Product fit for type of project

Product fit for vertical / business type

RFP definition assistance

Current tools vs requirements gap analysis

Services offered during SIEM acquisition and deployment:

SIEM implementation

SIEM project planning

Proof-of-concept deployment management

Product testing in production environment

Data source integration and collection architecture

Default contents tuning

Post-sale, operational services:

SIEM analyst training

Performance tuning and capacity planning

SIEM project management

Custom content creation

Custom device integration

SOC building

Vendors and consulting firms offer other types of services as well, all the way up to “co-managed SIEM” where a 3rd party firm manages your SIEM deployment for you. Will future SIEM work better out of the box? Yes, I think so. Will SIEM ever be as simple as a firewall? No, never: it is the inherent complexity of security monitoring that cannot be squeezed out even by creative engineering…

Friday, July 29, 2011

Imagine you own a broken, dilapidated, failing SIEMcrap deployment. What? Really… that, like, never happens, dude! SIEM is what makes unicorns shine and be happy all the time, right?

Well… mmm… no comment. In this post, I want to address one common #FAIL scenario: a SIEM that is failing because it was deployed with the goal of real-time security monitoring while the company was nowhere near ready (not mature enough) to run any monitoring process and operations (criteria for it). On my log/SIEM maturity scale (presented here; also see this related post from Raffy), such companies are either in the ignorance phase or maybe the log collection phase.

And herein lies the problem: if you deployed one of the legacy SIEMs born in the 1990s that are not built on a solid log management platform, the tool will actually suck at the most fundamental level: log collection. The specific issue is that most of these early tools were designed to selectively collect only what was deemed necessary for real-time security monitoring (vs all log data). In essence, you have a tool with monitoring features that you don’t use and with collection features that you can use, but that are weak.

What to do? You have these options:

Leave it to rot; you can always keep it just to boast to your friends (and PCI QSAs) that “ye own one of ‘em olde SIEMs”

Blow it away and join the “SIEM doesn’t work” crowd – and maybe buy a simple log management tool later

Deploy a log management tool to “undergird” your crappy SIEM; you have a choice of buying from the same SIEM vendor (if they have it) or a different vendor

Build your own log management layer on syslog and open source tools

I have seen people take each of the above four options. Personally, I have seen much more success with option #3 (buy log management) and not infrequently with #4 (build log management) – BTW, this deck might help you choose. You want to move your SIEM setup from the “get some logs – ignore all logs” model to “collect all/more logs – review some logs,” which is typically much better aligned with your level of maturity. And then grow and solve more problems with your SIEM and demonstrate “quick wins.” While you are at it, review some architecture choices discussed here.
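For flavor, option #4 can be sketched in a few dozen lines. The following is a minimal illustration – not a recommendation – of the “collect all/more logs – review some logs” model: every line gets archived into SQLite, and only lines matching review patterns are surfaced. The file names and patterns here are assumptions made up for the example.

```python
# Toy "build your own log management layer" sketch: archive everything,
# surface only a reviewable subset. Patterns below are illustrative.
import re
import sqlite3

REVIEW_PATTERNS = [  # assumption: what "review some logs" means for you
    re.compile(r"fail|error|denied|refused", re.I),
    re.compile(r"segfault|panic", re.I),
]

def index_logs(syslog_path, db_path="logs.db"):
    """Load every line (collect all), flag the reviewable subset."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS logs (line TEXT, review INTEGER)")
    with open(syslog_path) as f:
        rows = [(ln.rstrip("\n"),
                 int(any(p.search(ln) for p in REVIEW_PATTERNS)))
                for ln in f]
    con.executemany("INSERT INTO logs VALUES (?, ?)", rows)
    con.commit()
    return con

def review_queue(con):
    """The 'review some' part of the model."""
    return [r[0] for r in
            con.execute("SELECT line FROM logs WHERE review = 1")]
```

Crude as it is, this captures the shift in model: collection is unconditional, and review is a query over what was collected, not a filter at collection time.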

Thursday, July 28, 2011

As I am going through my backlog of topics I wanted to blog about (but didn’t have time for the last 4-6 months), this is the one I really wanted to explore. Here is the scenario:

Something blows up, hits the fan, starts to smell bad, <insert your favorite incident metaphor> … either in your IT environment or at one of your clients’

Logs (mostly) and other evidence are taken from all the components of the affected system and packaged for offline analysis

You get a nice 10MB–10GB pile of juicy log data – and they want “answers”

What do you do FIRST? With what tools?

Let’s explore this situation. I know most of you would say “just pile ’em into splunk” and, of course, I will do that. However, that is not the full story. As I pointed out in this 2007 blog post (“Do You Enjoy Searching?”), to succeed with search you need to know what to search for. At this point of our incident investigation, we actually don’t! Meanwhile, any volume of log data beyond a few megabytes makes the “trial and error” approach of searching for common clues fairly ineffective.

If you received any hints with the log pile (“I think the user ‘jsmith’ did it” or “it seems like the 10.1.3.2 IP was involved”), then you can search for them (and then branch out to co-occurring and related issues and drill down as needed), but your investigation will then suffer from the “tunnel vision” of only seeing the initially reported issue – and that is, obviously, a bad idea.
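One cheap way to fight that tunnel vision is to “branch out” mechanically: search on the hint, harvest co-occurring indicators (here, naively, just IP addresses), then search on those too. A minimal sketch, assuming the logs are simply a list of text lines:

```python
# Pivot from an initial hint to co-occurring IPs - a toy "branch out"
# pass to widen a hint-driven search beyond the reported issue.
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")  # naive IPv4 match

def search(lines, term):
    """The plain 'search for the hint' step."""
    return [ln for ln in lines if term in ln]

def pivot(lines, hint):
    """Search on the hint, then re-search on every IP seen near it."""
    hits = search(lines, hint)
    related_ips = {ip for ln in hits for ip in IP_RE.findall(ln)}
    branched = {ip: search(lines, ip) for ip in related_ips}
    return hits, branched
```

The same pivot idea extends to usernames, hostnames or ports; IPs are just the easiest indicator to extract with a regex.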

Let’s take a step back and think: what do we want here? What is our problem? We want a way to explore ALL the logs in a pile, across log types, across devices, across all time AND then also following a timeline of events. In other words, we ain’t in “searchland” here, buddy…

If you have an enterprise SIEM sitting around (and one with well-engineered support for diverse historical log imports – which is NOT a certainty, BTW), you should definitely load the logs there as well. I like this approach since you can then run cross-device summary reports over the entire set, slice the set in various ways (type of log source, log source instance, type of log entry – categorized, time period filter, time trend, etc), and use data visualization tools (treemaps, trend lines, link maps and other advanced visuals built on parsed, normalized and categorized data) to get a big-picture view of the pile.

Looking at the open source log tools, does anything look promising for the task? OSSIM can do the trick (even though its historical log import is not my favorite), but nothing else does. In some cases, I used Sawmill (free trial) for my “big picture first look,” but it is not cross-device and only shows reports for each log type individually. If I were feeling really adventurous (and was on hourly billing), I could actually send all the logs via a syslog streamer into OSSEC (in order to see which log entries the tool flags as interesting/alertable), but this is not really something I’d enjoy doing. I am almost tempted to say that you can use something like afterglow, but it relies on parsed data that you’d still need to cook somehow (such as, again, by using a SIEM). Log2timeline is useful, but only for one dimension – and the one that splunk actually addresses pretty well already.

To generalize, you need (a) a search tool and (b) an exploration tool. The search tool should help you quickly answer SPECIFIC questions. The exploration tool should use data to generate “hints” on WHAT questions you should start asking…
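As a toy illustration of the “exploration tool” side, here is a first-look summary that counts entries per hour, per host and per log-producing program across the whole pile – hints on where to look, not answers. It assumes classic syslog-style lines (“Jul 25 10:00:01 host program[pid]: message”); the regex is a deliberate simplification.

```python
# First-look exploration of a log pile: who logged, when, and what
# produced the entries. Assumes classic syslog line layout.
from collections import Counter
import re

# Captures: hour bucket ("Jul 25 10"), hostname, program name.
TS_RE = re.compile(r"^(\w{3}\s+\d+\s\d{2}):\d{2}:\d{2}\s+(\S+)\s+([^:\[]+)")

def explore(lines):
    """Summarize the whole pile to generate hints, not answers."""
    by_hour, by_host, by_prog = Counter(), Counter(), Counter()
    for ln in lines:
        m = TS_RE.match(ln)
        if not m:
            continue  # unparseable lines are simply skipped in this sketch
        hour, host, prog = m.groups()
        by_hour[hour] += 1
        by_host[host] += 1
        by_prog[prog.strip()] += 1
    return by_hour, by_host, by_prog
```

Even these three counters already suggest questions (“why did web1 log 10x more at 3AM?”) that you can then take back to the search tool.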

Wednesday, July 27, 2011

OK, this WILL be taken the wrong way! I spent years whining about how use cases and your requirements should be THE MAIN thing driving your SIEM purchase. And suddenly Anton shows up with a simple “Top 10 list,” so… blame it on the cognac.

This list is AN EXAMPLE. SAMPLE. ILLUSTRATION. It is here FOR FUN. If you use it to buy a SIEM for your organization, your girlfriend will sleep with your plumber. All sorts of bad things can and likely will happen to you and/or your dog – and even your pet squirrel might go nuts. Please look up the word “EXAMPLE” in the dictionary before proceeding!

On top of this, this list was built with some underlying assumptions which I am not at liberty to disclose. Think large, maybe think SOC, think complex environment, etc. Obviously, an environment with its own peculiarities … just like yours.

With that out of the way, Top 10 Criteria for Choosing a SIEM … EXAMPLE!

1. User interface and analyst experience in general: ease of performing common tasks, streamlined workflows, small number of clicks to critical functions and efficient and quick information lookups (including external information) when needed during the investigation

3. Log source coverage: full integration of most (better: all) needed log sources before operational deployment, detailed parsing and normalization of all fields needed for the analysts’ work; coverage of device, OS and application logs; wide use of real-time log collection methods, even at a cost of using agents

5. Reporting: report performance, visual clarity, ease of modification and default/canned report content, ability to create custom reports on all data in a flexible manner without knowing the SIEM product internal structures and other esoterica

6. Search and query: high (seconds) performance of searches and queries when investigating an incident, access to raw log data via an efficient search command, tied to the main interface

7. Escalation, shift and analyst collaboration support: a system to manage collaborative investigations of security issues, take notes, add details and review/approve the workflow; likely this requires an advanced case management / ticketing system to be built in

8. Storage scalability: ability to gradually expand storage on demand as the environment grows; this applies to both parsed/normalized data storage and raw log storage

9. Log categorization and normalization: complete log categorization and normalization for cross-device correlation that enables the analysts to “cross-train” rather than “device-train” before using the SIEM well

10. New log source integration technology and process: ability to either quickly integrate new log sources or have vendor do it promptly (days to few weeks) upon request

Tuesday, July 26, 2011

A lot of good work on logging standards as well as standards for the “surrounding areas” (correlation rules, parsing rules, etc) will happen at this first-ever NIST workshop on EMAP.

Please mark your calendars to save the date for an EMAP Developer Workshop to be held August 29-30, 2011 at the NIST Campus in Gaithersburg, Maryland. We are still formalizing the agenda, but topics to be covered will include:

· Discussion of target use cases and requirements as identified by EMAP working group.

· CEE Overview and in-depth discussion of current issues.

· Discussion of EMAP component specifications and issues/questions for the community.

· Discussion of EMAP roadmap and connections with other efforts within security automation.

We are in the process of standing up a registration page and creating the agenda. A teleconference line will be provided for those who cannot attend in person. More details to come in the near future; we hope to see you there.

If you are dealing with logs and SIEM (such as building, or even using the tools) and care about standards, please consider attending – but only if you will contribute!

Monday, July 25, 2011

“Implementing SIEM sounds straightforward, but reality sometimes begs to differ. In this session, Dr. Anton Chuvakin will share the five best and worst practices for implementing SIEM as part of security monitoring and intelligence. Understanding how to avoid pitfalls and create a successful SIEM implementation will help maximize security and compliance value, and avoid costly obstacles, inefficiencies, and risks.”

As I was drinking cognac on the upper deck of a 747, flying TPE-SFO back from a client meeting, the following idea crossed my mind: CAN one REALLY do a decent job with log management (including log review) if their budget is $0 AND their “time budget” is 1 hour/week? I got asked that when I was teaching my SANS SEC434 class a few months ago and the idea stuck in my head – and now cognac, courtesy of China Airlines, helped stimulate it into a full blog post.

So, $0 budget points to using open-source, free tools (duh!), but 1hr/week points in exactly the opposite direction: commercial or even outsourced model.

The only slightly plausible way that I came up with is:

Spend your 1st hour building a syslog server; it can be done, especially if starting from an old Linux box that you found in the basement (at $0); don’t forget logrotate or an equivalent

Spend the next few weeks (i.e. hours) configuring various Unix, Linux and network devices (essentially, all syslog log sources) to log to it

Consider deploying Snare on a few Windows boxes (if needed); it would likely be easier than doing a remote pull – too much tuning might be needed otherwise
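For flavor, the “syslog server” step can even be faked in pure Python – a toy UDP sink with built-in rotation standing in for logrotate. This is a sketch, not production advice: a real $0 deployment should use syslogd/rsyslog, and real syslog is UDP 514 (which needs root), so a made-up unprivileged port is used here.

```python
# Toy UDP syslog sink: one datagram = one message, written to a rotating
# file. SYSLOG_PORT and file names are assumptions for the example.
import logging
import logging.handlers
import socketserver

def make_logger(path="central.log"):
    # Time-based rotation stands in for logrotate: keep 7 daily files.
    logger = logging.getLogger("syslog-sink")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.TimedRotatingFileHandler(
        path, when="midnight", backupCount=7))
    return logger

class SyslogUDPHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Record the sender address so per-device review is possible later.
        data = self.request[0].decode("utf-8", errors="replace").strip()
        self.server.logger.info("%s %s", self.client_address[0], data)

def make_server(host="0.0.0.0", port=5514, log_path="central.log"):
    srv = socketserver.UDPServer((host, port), SyslogUDPHandler)
    srv.logger = make_logger(log_path)
    return srv

# To actually run it (UDP 514 would need root):
#   make_server().serve_forever()
```

Point your syslog sources at this box and you have the crudest possible version of the “1st hour” above; everything after that is the configuration grind described in the next steps.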

Monday, July 18, 2011

The Director, Product Marketing is responsible for developing, planning and executing externally-focused product marketing strategies, plans and programs for the industry-leading NitroView SIEM, log management, database monitoring, application monitoring and IDS/IPS solutions. They will research and understand security market trends by working with industry analysts and engaging prospects and customers, closely monitor and analyze competitor offerings, and develop value propositions, product positioning and messages for enterprise and government markets worldwide. They will drive and lead all new product launch and introduction activities, and support ongoing product and solution campaigns and programs.

Candidates in metro Boston, metro Washington DC or open to virtual, home office arrangements are welcomed to apply to jobs@nitrosecurity.com.

Responsibilities:

a. Work closely with Product Management, Engineering and Operations to fully understand current and planned technologies, products and solutions

Monday, July 04, 2011

Just a quick announcement about my “PCI in the cloud” class that I am teaching this week. The location has been finalized:

Location (map): Ariba Silicon Valley Office, Sequoia Conference Room, 910 Hermosa Court, Sunnyvale, CA (please use the main entrance and tell the receptionist that you are there for the CSA PCI class; lunch and coffee will be provided)

Date: Friday, July 8, 2011 at 9AM
There are still, I think, 2-3 seats left at $20/seat (beta price! must provide class feedback!!), so go and register here.

UPDATE: 7/4/2011 5:50PM Sorry, sold out! I will check with the host tomorrow about the room size and there is a slight chance that we can fit more than 25 people, but it is not a certainty.

Friday, July 01, 2011

Blogs are "stateless" and people often pay attention only to what they see today. Thus a lot of useful security reading material gets lost. These monthly round-ups are my way of reminding people about interesting and useful blog content. If you are “too busy to read the blogs,” at least read these.

About Me

He is a recognized security expert in the field of log management and PCI DSS compliance. He is the author of the books "Security Warrior", "PCI Compliance" and "Logging and Log Management", and a contributor to "Know Your Enemy II", "Information Security Management Handbook" and others. Anton has published dozens of papers on log management, correlation, data analysis, PCI DSS, security management, honeypots, etc. His blog securitywarrior.org was one of the most popular in the industry.

In addition, Anton teaches classes (including his own SANS class on log management) and presents at many security conferences across the world; he recently addressed audiences in the United States, UK, Singapore, Spain, Russia and other countries. He has worked on emerging security standards and served on the advisory boards of several security start-ups.

Before joining Gartner in 2011, Anton was running his own security consulting practice www.securitywarriorconsulting.com, focusing on logging and PCI DSS compliance for security vendors and Fortune 500 organizations. Dr. Anton Chuvakin was formerly a Director of PCI Compliance Solutions at Qualys. Previously, Anton worked at LogLogic as a Chief Logging Evangelist, tasked with educating the world about the importance of logging for security, compliance and operations. Before LogLogic, Anton was employed by a security vendor in a strategic product management role. Anton earned his Ph.D. degree from Stony Brook University.