I am trying to come up with some security best practice "maintenance and monitoring" procedures in light of a 3rd-party penetration testing report, and I was hoping for some expert pointers. Basically, a 3rd party comes in once per year to do a full vulnerability assessment and penetration test of all our servers and devices attached to our network. Although I am not the most technical, I can see there are always common themes in their findings (e.g. certain software/services missing patches, silly stuff like weak/default passwords, plain-text services, systems not adhering to best-practice hardening guides, devices that don't adhere to the golden build for the given system role, open shares on servers with sensitive data, stale AD accounts that haven't been disabled, etc.).

My view on the pen testing/vulnerability assessment reports is that the response to them seems to be pretty reactive: the internal IT team addresses the findings as and when they are made aware of them, aka security firefighting. My personal theory is that for every vulnerability found there is a root cause, a missing security procedure that explains why the vulnerability was present in the network. Rather than have the pen testers come in 12 months later, scan all the new servers, and find the exact same types of vulnerability again and again, I feel strategically we should look at how these issues came to be and how we can prevent them from occurring again: preventative measures, or failing that, detective measures to find them ourselves on a regular basis. Are there any useful best-practice guides for internal security/IT teams that could prevent 99% of the issues a 3rd party would find, through effective security standards and maintenance/monitoring? Sort of a root cause for each type of vulnerability/security weakness, and some strategic best practices for preventing them from ever happening again, i.e. the security standards and proactive maintenance and monitoring tasks required to keep your network vulnerability-free. I don't really know how to phrase what this kind of checklist would be, so any guidance is most welcome.

You are correct that there are usually root causes behind any findings: typically poor patch management, failure to follow policies/standards/procedures during deployment, mismanagement of systems, or lack of training (to name a few).

In a perfect world, at a high level you would be able to classify all of those under "failure to follow policy," since everything else is (or should be) driven by policy. However, once you start drilling down into standards and procedures, things get too diverse to put into a collective "best practice" document for the InfoSec community. For an individual organization this is absolutely possible, but a comprehensive industry-wide list is not realistic.

Is the vulnerability/pen test fulfilling a compliance requirement? More importantly, are current best practices for patch management, server hardening, and network hardening actually being followed and monitored? If the company is not doing the basics with security, then your scheduled pen test will always find flaws. Does any internal testing occur throughout the year? And is the 3rd party actually performing a true penetration test, or are they hitting the magic button to auto-scan and just reporting back your vulnerabilities? A true test will show details of where each vulnerability exists, along with proof that it truly exists. If they are not digging beyond the scanner, then the results could be false positives.

Grendel wrote: You are correct that there are usually root causes to any findings, which typically is due to poor patch management, failure to follow policies/standards/procedures during deployment, mismanagement of systems, or lack of training (to name a few).

In the perfect world, at a high level you would be able to classify all those under "failure to follow policy" since everything else is (or should be) driven by policy. However, once you start drilling down into standards and procedures, things get too diverse to put into a collective "best practice" document for the InfoSec community - for an individual organization, this is absolutely possible, but a comprehensive list is not realistic.

Are there any useful documents and guides that detail pre-implementation best practices and post-implementation maintenance and monitoring tasks required to avoid the common vulnerabilities?

Probably the first place to start is ISO 17799. That will give you an industry-wide overview of what's expected within an organization with respect to all manners of security. After that, you can start drilling down towards specific policies, standards, and procedures (including verification of compliance).

Keep in mind, what we are discussing is within the domain of an ISSO or CISO. People (like me) have spent our careers developing our knowledge to understand threats and risks within an organization so that we can cyclically evaluate and improve its security posture. There is a lot of information and learning that goes into this, and I think ISO 17799 is a good place to start.

Hope that helps. I'm sure once you take a look at the ISO, you'll have a ton more questions. Fire away.

Oh, and ISO 17799 was redesignated a while back; it's now called ISO 27002:2005. But I've been in the habit of calling it ISO 17799 for so long that I still use that name in conversation. I think most of us still call it that, unless we are writing documentation (at least within my circle).

Grendel wrote: You are correct that there are usually root causes to any findings, which typically is due to poor patch management, failure to follow policies/standards/procedures during deployment, mismanagement of systems, or lack of training (to name a few).

In the perfect world, at a high level you would be able to classify all those under "failure to follow policy" since everything else is (or should be) driven by policy. However, once you start drilling down into standards and procedures, things get too diverse to put into a collective "best practice" document for the InfoSec community - for an individual organization, this is absolutely possible, but a comprehensive list is not realistic.

Thanks for this. Just to get some context, I wonder if you could provide some examples for each of the root causes you mention:

poor patch management
failure to follow policies/standards/procedures during deployment
mismanagement of systems

I think as far as "poor patch management" goes, many companies still think "if I set Windows Update to automatically update, then I will be compliant." This might work for very small environments, but not so much for larger ones. There should be written policies that state what is required to be compliant, e.g. "All critical and security patches must be installed within X days of release." The average is probably 30 days. The policy should be written so that it takes a number of factors into consideration.

Actual risk rating - what a vendor labels a patch/update may not be what the organization labels it. This would depend on the number of systems affected. Is the patch a fix for a vulnerability? Is the vulnerability exploitable? How easily can it be exploited? Yada yada yada...
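To make that concrete, an organization can fold those factors into its own internal rating rather than taking the vendor's label at face value. A minimal sketch in Python; the factor names, weights, and thresholds here are all invented for illustration, not from any standard:

```python
def internal_risk_rating(vendor_severity, affected_systems, exploit_available):
    """Combine the vendor's severity with local context into a 0-10 rating.

    vendor_severity: 1 (low) .. 4 (critical), as published by the vendor.
    affected_systems: how many of our systems the patch applies to.
    exploit_available: True if a public exploit is known to exist.
    """
    score = vendor_severity * 2        # base score: up to 8
    if affected_systems > 50:
        score += 1                     # wide exposure in our environment
    if exploit_available:
        score += 1                     # realistically exploitable
    return min(score, 10)

# A "critical" vendor patch on a handful of isolated boxes can rate the same
# as an "important" one that touches every server and has a public exploit.
print(internal_risk_rating(4, 10, False))  # 8
print(internal_risk_rating(3, 200, True))  # 8
```

The point isn't the specific arithmetic; it's that the rating feeding your patch SLA is a documented, repeatable function of your environment, which is exactly what an auditor will want to see.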

How long does it take to properly test certain patches? Yes, every good patch management process should include proper testing. Deploying an untested patch could be just as bad as not deploying it.

What does your particular flavor of compliance say? In most cases it says you must have a policy to govern this, and it offers some guidelines. When you get audited, the auditor will examine the current policies and review the environment to see if you are keeping to that policy. If you say "we will patch critical systems within 15 days" and an auditor comes in to review and finds all your major database servers 30 days out of date from the last patch release, then they will mark it as a finding.

Basically, if policy is written, then policy must be adhered to, or it's a finding. In some cases the policy writer will make up something that may sound good on paper but is horrible to implement.

I will give you an example from a former job. The ISO stated that ALL systems needed to be compliant within 30 days of patch release. He did not specify any differences between servers, workstations, critical systems, etc. I was the sole person responsible for meeting that demand with 100 servers and 300 workstations, which included traveling laptops. The servers consisted of a number of dev boxes, web servers, an Exchange server with a SAN, a database server (which I later found out had just Windows 2003 with no SPs, as well as 3 versions of MS SQL installed), and a couple of non-Windows systems. Oh, by the way, the only "patch management system" in place was a poorly configured WSUS server. And did I mention the only patching was whatever Auto-update said was needed? The previous person would just reboot and call it a day, and this went on for over 3 years before I got in. But I digress: the environment must meet the policy.
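That kind of SLA check is easy to self-audit before the auditor shows up. A minimal sketch, assuming you can export each system's criticality tier and last-patched date from your inventory (the inventory, tiers, and SLA numbers below are all made up for illustration):

```python
from datetime import date, timedelta

# Hypothetical inventory: (system, tier, patch release date, date patched).
# A patched date of None means the system has not been patched yet.
INVENTORY = [
    ("db01",  "critical", date(2024, 5, 1), None),
    ("web01", "standard", date(2024, 5, 1), date(2024, 5, 20)),
    ("dev01", "standard", date(2024, 5, 1), date(2024, 7, 1)),
]

# Example policy: critical systems within 15 days of release, others within 30.
SLA_DAYS = {"critical": 15, "standard": 30}

def sla_findings(inventory, today):
    """Return systems that would be marked as findings against the patch SLA."""
    findings = []
    for name, tier, released, patched in inventory:
        deadline = released + timedelta(days=SLA_DAYS[tier])
        if patched is None and today <= deadline:
            continue  # unpatched but still inside the window
        if patched is None or patched > deadline:
            findings.append(name)
    return findings

print(sla_findings(INVENTORY, today=date(2024, 6, 15)))  # ['db01', 'dev01']
```

Run monthly, a report like this turns "the auditor found our database servers 30 days out of date" into something you catch and fix yourself, which is exactly the preventative/detective posture the original poster is after.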

Thanks for the pointers re patch management. I'd appreciate some input, and ideally some practical examples, on "failure to follow policies/standards/procedures during deployment" and "mismanagement of systems", and the management controls around those.

Example: a user runs an FTP server on a system intended for a Web application only. Control: monthly scanning, plus administrative action.

This should NOT be considered an appropriate action for any and all organizations. Each org has a different culture, which may invalidate any of these controls.
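Still, the monthly-scanning control above is scriptable: compare what a box is actually running against the approved baseline for its role (the "golden build" the original poster mentioned) and escalate the difference. A minimal sketch; the roles, port sets, and data are invented for illustration:

```python
# Approved listening ports for each server role (the "golden build" baseline).
BASELINE = {
    "web": {80, 443},
    "db": {1433},
}

def drift(role, observed_ports):
    """Return ports observed on the box that the role's baseline doesn't allow."""
    return sorted(observed_ports - BASELINE[role])

# A web server found listening on 21 (FTP) is a deviation to escalate.
print(drift("web", {80, 443, 21}))  # [21]
print(drift("db", {1433}))          # [] - matches the golden build
```

The observed ports would come from whatever scanner you already run monthly; the value of the script is that "deviation from the golden build" becomes a detective control you execute yourselves instead of a finding the pen testers hand you once a year.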

Thanks for this. Is it part of your deployment policy to scan the server with a vulnerability scanner before it's attached to the production network? Is there any useful resource that lists the full set of pre-deployment checks/supervisor checks before a system would be approved to be added to the network? Is "server predeployment checklist" the best search term for Google?

Well, give the search a try and see what comes up. Another good resource for server-hardening standards is nist.gov. As far as whether or not to scan a system: basically, do whatever you have to do to ensure the system is ready for production. If that means hitting it with something like Nessus, GFI LANguard, or Rapid7's Nexpose, then do it. When an official audit comes around, you will need to produce proof that controls are in place and working according to your policies.