Back in the late '80s, I helped maintain the infamous Dirty Dozen malware list, which was created by Tom Neff and later updated by Eric Newhouse. The Dirty Dozen list originated because (cue the nostalgia) we had only a handful or two of malware programs to worry about. Neff's original list contained mostly Trojans, although early Apple viruses made it as well.

The number of malware programs quickly became multiple dozens, then exceeded 100. Neff and Newhouse gave up on maintaining the list because their hobby was taking up too much of their free time.

Today, I’m a big believer in each organization maintaining its own dirty dozen list, but instead of listing malware programs to be worried about, it should list the top dozen security events you look out for.

You say you're already on the lookout? Probably so: Most enterprises that use event log monitoring to keep an eye on their networks end up doing too much monitoring. For years, the Verizon Data Breach Report has told readers that most data breaches could have been caught by monitoring tools, if only the owners had been looking. It’s a problem of too much noise and not enough thoughtful planning.

The average enterprise generates literally millions to billions of events and collects them in a centralized repository. Companies even brag about how many petabytes of storage they've purchased to hold all their event log results. Although I’m smiling on the outside, I’m always mentally rolling my eyes on the inside. Tell me that you’re collecting billions of events and I know you're not doing a good job detecting evidence of malware or malicious hacker activity.

Talk about the proverbial needle in a haystack. Most companies would be far better off defining a handful or two of events that clearly indicate malicious behavior and throwing away the rest. Actually, the best strategy is to let each endpoint device generate as many events as it likes -- but forward and alert on only a dozen nasty ones. That way, if you need the historic detail, you can go to the endpoint device and get all the minutiae you need. But don’t forward millions or billions of events to a database and hope you can create order from disorder (though some regulatory guidelines encourage this).

A better monitoring strategy

A far better strategy is to get the right people into a room for a day and decide on the dozen or so events that the company should be monitoring and alerting on. Instead of trying to define every event that might be malicious, focus on defining what would always indicate maliciousness -- like 50 million guesses against the domain admin password.

Malicious event log messages should be broken down into three major categories:

Single events that indicate maliciousness

Individual endpoint aggregate events that indicate maliciousness

Enterprise-wide aggregate events that indicate maliciousness

Start by defining which events -- even in a single instance -- would indicate maliciousness. A good example is someone trying to log on to a honeypot system. Once a honeypot system is fine-tuned to ignore all the normal logon events and touches that happen to any system on a network, it should forward any previously undocumented “touches” from any system. It’s a fake system, so nothing should be touching it, and if an item makes contact, it needs to be investigated.
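The honeypot rule can be sketched in a few lines: keep a small allowlist of sources whose routine touches have been tuned out, then flag anything else that contacts a honeypot host. The hostnames and event fields below are illustrative assumptions, not real infrastructure.

```python
# Sketch: flag any event that touches a honeypot from an unexpected source.
# Honeypot names and the tuned-out sources are hypothetical examples.

HONEYPOT_HOSTS = {"hpot-01", "hpot-02"}
EXPECTED_SOURCES = {"patch-mgmt-01", "av-scanner-01"}  # routine, tuned-out touches

def honeypot_alerts(events):
    """Return events where a honeypot was touched by an unexpected source."""
    return [
        e for e in events
        if e["target"] in HONEYPOT_HOSTS and e["source"] not in EXPECTED_SOURCES
    ]
```

Because nothing legitimate should touch a fake system, any non-empty result is worth investigating.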

Another one-time event example could be a member of the domain admins group logging onto a regular workstation. This should never happen. Domain admins should only be logging on to administrative workstations, domain controllers, or a limited set of other pre-authorized computers. You can tell computers to send alert messages when a highly privileged group member logs onto an unauthorized computer. It’s a great way to catch APT attackers.

Another one of my favorite single events involves installing an application control program, in audit mode only, on servers. Take a snapshot of what is supposed to be running on your server, being sure to create authorized installer rule exceptions, then alert on the unexpected. Servers shouldn’t be getting a whole lot of unexpected executables, and if they do, it needs to be investigated.
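Conceptually, the audit-mode check is just set difference: anything observed running that is in neither the snapshot baseline nor the authorized-installer exceptions gets flagged. The executable names below are illustrative:

```python
# Sketch: compare executables observed on a server against a snapshot
# baseline plus authorized-installer exceptions. Names are hypothetical.

BASELINE = {"sqlservr.exe", "w3wp.exe", "svchost.exe"}
AUTHORIZED_INSTALLERS = {"msiexec.exe"}  # sanctioned install activity

def unexpected_executables(observed):
    """Return sorted executables not covered by the baseline or exceptions."""
    return sorted(set(observed) - BASELINE - AUTHORIZED_INSTALLERS)
```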

Next, define what events indicate maliciousness when they happen to a single object or endpoint. A good example would be a higher-than-normal number of failed logons against a particular domain admin account. Every week, most users are responsible for one or more failed logons (unfortunately, our passwords and PINs are getting longer and more complex). Every user or computer has a certain “rhythm” of bad logons. Maybe it’s two a day, 10 a day, or 50 a week. Regardless, the detection event should kick off when the number of failed logons exceeds the normal baseline by a wide margin.

Say that a normal user has five bad logons a day, on average; the threshold alert event should be something like 50 or 100. Don’t set the detection threshold so low that you end up with lots of false positives. Remember, we are defining events that are truly malicious (or at the very least require investigation to rule out maliciousness). Another good example is a higher-than-normal number of failed logons on perimeter routers.
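The per-account threshold test is one comparison against the measured baseline. The 10x multiplier below is an illustrative choice matching the article's example (baseline of five, alert around 50), not a recommended constant:

```python
# Sketch: flag an account whose failed-logon count far exceeds its
# baseline "rhythm". The 10x multiplier is an illustrative assumption.

def exceeds_baseline(failed_count, daily_baseline, multiplier=10):
    """True when failures exceed the baseline by a wide margin."""
    return failed_count >= daily_baseline * multiplier
```

Tuning the multiplier per account (or per account class) keeps the rule quiet for chronically fat-fingered users while still catching password-guessing bursts.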

Lastly, you must define which events collected across the enterprise indicate maliciousness. For this, you’ll definitely need to collect these events in a centralized repository (this shouldn’t take petabytes of storage for any enterprise). A good example is the number of failed logons, in aggregate, across the enterprise. Again, monitor and establish a normal baseline (for example, 5,000 bad logons a day) and set an alert when the threshold is exceeded by a large amount (100,000 bad logons). Another good example is a higher-than-normal number of malware detections across the enterprise. Inventory all the sources of event log messages -- firewalls, computer endpoints, antimalware logs, application logs, and so on -- then decide which events should be monitored and alerted on.
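The enterprise-wide version is the same idea rolled up: sum the per-endpoint counts and compare against a threshold set far above the measured baseline. The numbers below reuse the article's example figures; the endpoint names are made up:

```python
# Sketch: sum failed logons reported by each endpoint and alert when the
# enterprise-wide total blows past a threshold set well above baseline.

DAILY_BASELINE = 5_000      # normal enterprise-wide bad logons per day
ALERT_THRESHOLD = 100_000   # far above baseline, per the example in the text

def enterprise_failed_logon_alert(per_endpoint_counts):
    """Return (should_alert, total) for a day's failed-logon counts."""
    total = sum(per_endpoint_counts.values())
    return total >= ALERT_THRESHOLD, total
```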

Don't try to detect everything that could be malicious. Break the habit! Define 12 events that are guaranteed to mean maliciousness and call it a day. If you can’t narrow it down to a dozen, try two or three dozen, max. Anything beyond that will probably result in more false positives and noise than good detection.

When in doubt, go smaller. I guarantee that you’ll do a far better job at detecting badness than most companies.