Think about your bank. The building was specifically designed with security in mind, with guards, vaults, safety deposit boxes, and locks keeping the bank’s goods safe from harm. Though all of these elements must be in place, they should be consistently evaluated and tested to ensure they still successfully keep malicious actors out.

The same principle applies to any security program. While you can define a defense in depth model and use people, processes, and technology to manage the tools, it’s important to not overlook safety checks. Protecting yourself from threats requires consistently asking yourself whether your security program is working as designed.

What are pen tests, and how are they different from Red Team exercises?

A pen test is an authorized attack on an organization’s technology, people, or facilities, designed to evaluate the target’s security controls. These tests typically focus on one specific environment or system, such as an internal network or web application.

Penetration tests involve assessing the target using the tools and techniques of an attacker to identify vulnerabilities. Those vulnerabilities are then exploited to determine what type of access an attacker could gain. The goal is often to find and chain as many vulnerabilities as possible within the scope of the test and to determine the risk each weakness poses.

Red Team exercises* are often confused with pen tests, but they are not the same thing. Red teaming is designed to improve the readiness of an organization by preparing the staff of the targeted organization for the eventuality of an attack and ensuring its detection and response capabilities are adequate. The Red Team engages in a full-scope attack simulation, meaning any available technique may be used, with the goal of testing organizational security rather than that of an individual system or application. These engagements are goal-oriented and are conducted in a way that simulates the target organization’s most likely threat actor. The goal of these simulations is to discover shortcomings in policies, procedures, and technologies in order to improve the security of the entire organization.

*It is important to note that Red Team exercises are not ideal for every company, especially for organizations with a lower security maturity. Rapid7’s Advisory Services team can help assess and determine your company’s maturity level and help develop a plan to increase the overall effectiveness of your company’s security plan.

Defining the scope of your pen test

Before engaging a testing team, define what you want evaluated. For example:

Are you looking for just a network-based pen test that searches for OS and application-based vulnerabilities?

Do you want to evaluate your perimeter defenses (external pen test) or look at your internal defenses based on a presumed compromise (internal network pen test)?

Do you want to test employee security awareness through social engineering?

Do you want the pen test team to target a particular section of your network?

Do you need the team to exclude any systems from their tests?

Does your organization have many web-based services or applications that could benefit from a web app pen test?

Other considerations

So, let’s assume you’ve already decided to start performing pen tests as part of your organization’s security program and have defined clear goals. Here are some other areas to consider and how the additional sub-controls of Control 20 address these:

A pen test may require the use of a domain account or accounts that are used to perform some authenticated parts of the testing, such as in the case of a web application. This is normal practice. You’ll want to make sure accounts are removed or disabled once testing is over—or, at the very least, ensure any activity on those accounts is isolated to the testing windows.

Define the objective of the engagement. For example, consider having the tester look for sensitive internal information or personally identifiable information (PII), attempt to gain access to a specific system, or gain physical access to a sensitive office.

Leverage the results of the pen test in conjunction with vulnerability assessment results to determine how your vulnerability management program is working. Was a pen tester able to exploit a vulnerability you thought was already patched? There are many ways a pen test can help identify other programmatic deficiencies in your security program.

The final word

Many organizations avoid pen tests, often out of fear of what the test will find and what shortcomings of the security program it will expose. However, it’s important to recognize that no security program is perfect, and you are not alone in your concerns. It’s better to know and verify your weaknesses than to discover you were breached through an unpatched vulnerability that went unnoticed.

Rapid7 recommends that most organizations perform a pen test annually. Reach out and ask how we can help.

In today’s ever-evolving attack landscape, it’s not a matter of if, but when, a security incident will occur in your organization. If your company suffers a data breach or a phishing, ransomware, or DDoS attack, are you prepared to respond?

The key principle of CIS Critical Security Control 19 is to protect the organization’s information—and reputation—by developing and implementing an incident response infrastructure for quickly discovering an attack and effectively containing the damage, eradicating the attacker’s presence, and restoring the integrity of network and systems.

Your incident response plan serves as the foundation of your incident response strategy, so if you don't have one already, it's a good idea to start developing one now. If you do have one, consider reviewing and updating it.

[eBook] Prepare for Battle: Building an Incident Response Plan

Your incident response plan should include a definition of personnel roles, assigned job titles, and duties for handling computer and network incidents in your organization. Specific employees should be assigned to each role and their contact information included in the plan. Each of these roles requires specific training to ensure employees are clear on the role they play in an investigation when the plan is activated.

Key personnel to bring in are those whose skill sets and access cover the various data types involved. This includes the people managing your security and application log information, support personnel, system administrators, and network engineers.

In addition to the IT and security teams, bring in legal, public relations, communications, senior leadership, human resources, supply management, and vendors.

Testing your incident response plan

Testing is vital to ensure your incident response plan will be effective during an actual incident and that employees understand current threats, risks, and their responsibilities in supporting the incident handling team. Incident response tabletop exercises, which are discussion-based exercises in which personnel meet to discuss roles, responsibilities, coordination, and decision-making in a given scenario, are recommended, along with functional/simulation exercises. Security awareness training should also be included as part of both initial and ongoing training. The development of incident response testing plans and threat simulations can be internal or outsourced to a third party.

Incident response awareness

Both your employees and the public should be aware of how to report an incident to your organization’s security group. Develop standards for the time required to report anomalous events and security incidents, the mechanisms for such reporting, and the types of information that should be included in the incident notification. This reporting should also include notifying the appropriate community emergency response team in accordance with all legal or regulatory requirements.

Publish information regarding reporting computer anomalies and incidents to the incident handling team for all personnel, including employees and contractors. Such information should also be included in routine employee awareness activities.

Assemble and maintain information on third-party contact information that should be used to report a security incident in your organization. For example, you could maintain an email address or web page specifically designated for security incidents.

Further Considerations

Though this guidance is not specifically spelled out in the sub-controls of Critical Security Control 19, we often recommend the following to our clients:

Incident response retainer: Incident response retainers offer customers the ability to rapidly engage skilled personnel to perform a forensic investigation in the event of a suspected compromise or the real deal. These retainers are often an annual use-it-or-lose-it expense, so ensure the one you pick allows you to move from being reactive to proactive by reallocating unused hours toward penetration testing or tabletop exercises, as we do here at Rapid7.

Cyber-insurance: For many companies, cyber-insurance is a “check the box” control. When purchasing cyber-insurance, it is important to understand what is and isn’t covered as part of your plan. For example, many insurance policies will be nullified if you are not properly managing your logging infrastructure. It is also important to realize that cyber-insurance is not a replacement for implementing security controls. Do your research before purchasing a cyber-insurance policy and be sure your legal team weighs in, too.

To learn how Rapid7 can help improve your detection capabilities and incident response program—or take care of the whole thing for you—explore our SIEM solution, which allows you to detect and respond to attacks in-house. We also have incident detection and response services if you would like to consult with our team of highly trained professionals.

Ready to start detecting and responding to attacks with confidence? Try InsightIDR.

It can be easy to feel overwhelmed by Critical Control 18 due to the sheer size of the concept. Hear “application software security,” and your mind starts to conjure up images of waterfalls, Scrum masters leading the party into the next dungeon meeting, and a security guy with an expression that’s usually followed by the word “No.”

Though it can be tough to disentangle yourself from the negative imagery that accompanies application software security, give it a try. When upheld, this control can be the genie that makes your SDLC wishes and SecOps dreams come true.

The list could cause anyone to let out a nervous laugh or even a sigh, but the truth is that most of my clients (even the ones that are lower on the maturity scale) have at least some components of the control already in place.

These sub-controls include the following:

Ensure all versions of third-party software are still being supported by the vendor, and either update or retire versions that are not.

Deploy a web application firewall (WAF) to inspect traffic for known web application vulnerabilities, such as SQL injection (SQLi), cross-site scripting (XSS), and cross-site request forgery (CSRF). For applications that are not web-based, ensure there is an application firewall in place that can analyze the traffic. If the traffic is encrypted, the device should either sit behind the encryption termination point or be able to decrypt the traffic itself. If neither is appropriate in your environment, consider a host-based web application firewall.

Perform explicit error checking for internally developed applications, and document all input, including size, data type, and acceptable ranges or formats.

Use a web application scanner to test software developed internally or by a third party prior to deploying to production, after any major changes, and on a regular basis (at least once a month is preferred).

Ensure internally developed applications have all artifacts (such as test data and scripts, tools, and debug code) removed prior to deploying to production, so they are not accessible in the production environment.
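To make the explicit error-checking sub-control concrete, here is a minimal Python sketch (the field name and limits are hypothetical, not taken from the control itself) that validates a documented input’s data type, size, and acceptable range before the application uses it:

```python
def validate_age(raw):
    """Validate one documented input field: data type, size, and range."""
    if not isinstance(raw, str):
        raise TypeError("age must arrive as a string form field")
    if len(raw) > 3:                  # documented maximum size
        raise ValueError("age field too long")
    if not raw.isdigit():             # documented data type: non-negative integer
        raise ValueError("age must be numeric")
    age = int(raw)
    if not 0 <= age <= 130:           # documented acceptable range
        raise ValueError("age out of range")
    return age
```

Rejecting bad input explicitly, rather than letting it flow into queries or templates, is exactly the behavior this sub-control asks you to document and test.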

Now that you have an idea of which components you should have in place, let’s take a look at the process of implementing these controls, keeping in mind that it will look very similar across companies of various sizes and industries.

Step 1: Foster a Relationship with Your Application Development and Procurement Groups

A lot of problems I see when going out to a client site stem from security trying to operate in a vacuum and/or trying to control what other groups within IT do. Neither of those approaches is beneficial for a robust security organization, nor is either sustainable in the long run if you want buy-in from management or other teams.

At the very least, security should aim to foster communication with teams such as development, change management, and project management—though the best-case scenario would be establishing an ongoing relationship with them. Rather than dictating the rules to other teams, security practitioners should help enable them to do their jobs in as secure a fashion as possible without negatively affecting the business.

Cool story, bro, but how do we do this?

Before taking the list of controls and hitting other teams upside the head with it, have a series of meetings with stakeholders. The objective of these meetings should be to obtain a clear understanding of how the SDLC currently works, any pain points that could be addressed by the security team, and regulatory and compliance requirements that are already being addressed (spoiler alert: there’s overlap between the controls outlined by the CIS and the software development standards outlined by compliance requirements such as the Payment Card Industry Data Security Standard (PCI DSS)).

Once you have the meetings and a solid understanding of the current state of things, you can move on to the next phase.

Step 2: Implement Security Gates to Address the Controls Instead of Having a Pass/Fail at the End of the Software Development Lifecycle

Something I come across quite often when working with companies that have a seemingly solid SDLC is that they save all of their security checks for the end. By instead incorporating several control gates (security checkpoints between SDLC phases), organizations can save a lot of time, money, and rework.

For example, security practitioners can work with the business in the design and requirements-gathering phase to determine whether there is a security requirement in the first place, then follow it up with which control gates should be incorporated and at what stage of the project.

Let’s say that an organization is making a minor change to an existing piece of software that was developed in-house. Does it make sense to perform a full-blown penetration test or have security sign off on it prior to release? Maybe not. But what if you’re migrating from an outdated version of your database in order to be in line with the first control? All of a sudden, you need to:

Talk about what industry-standard best practices are for hardening the new version of the database (sub-control 7)

Ensure developers are trained (sub-control 8) to perform the explicit error checking and sanitization (sub-controls 3 and 5) during the development and QA phases

Perform a scan of the application using tools you have in place for web- or non-web-facing applications (sub-control 4)

Ensure pre-production data isn’t being migrated into production (sub-control 9), since you’re using environments that are at least logically separated (sub-control 6)
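To illustrate the gate idea, here is a hedged Python sketch (the finding records and threshold are invented for illustration; a real gate would consume your scanner’s actual output) that fails a release when any finding exceeds the severity the business agreed to accept:

```python
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate_passes(findings, max_allowed="medium"):
    """Return True if no finding exceeds the severity the business accepts."""
    limit = SEVERITY_RANK[max_allowed]
    return all(SEVERITY_RANK[f["severity"]] <= limit for f in findings)

# Example: one critical finding blocks the release.
findings = [
    {"id": "XSS-12", "severity": "medium"},
    {"id": "SQLI-3", "severity": "critical"},
]
```

Running this check between the QA and deployment phases, rather than as a single pass/fail at the very end, is the point of the control-gate approach.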

Step 3: Ensure You Have the Proper People and Tools in Place

I put this step at the end, although in all honesty, it should be an ongoing part of having an internal development function. It is easy (and normal, even) to be hypersensitive to all of the security requirements and controls your organization has to stay on top of—and easy to forget that they usually aren't a top concern for the rest of the business (or even the general IT org).

There are several things you can do to make sure development personnel have appropriate training on security best practices without starting a shouting match over whose budget it will come out of.

When you’re meeting with other stakeholders as part of the first step, try to identify at least one person in each meeting who either shows enough interest to be a security advocate themselves or knows someone on their team who can step up to the plate. From there, training can be as simple as sending them online training videos and materials, hosting lunch and learns with the advocates and the security team, attending meetings held by local Open Web Application Security Project (OWASP) chapters, or even sending them to security-specific training for their role if there is time, interest, and budget.

Finally, there are tools that can (and should) be considered to help ease the burden of trying to implement security controls around your SDLC. Sub-control 1 specifies needing a WAF (essentially an application firewall that is designed for web-facing applications), application firewalls for non-web-facing applications, and even host-based web application firewalls, as mentioned earlier.

Implementing security control gates within your current SDLC process, or building them into a new process, does not have to be a daunting task. Just remember to break the work up into smaller chunks throughout the process, rather than waiting until the end.

Ready to get your application software security streamlined and under control? Start a free trial of InsightAppSec today

This is a continuation of our CIS critical security controls blog series, which provides educational information on each control, as well as tips and tricks to consider. See why SANS listed Rapid7 as the top solution provider addressing the CIS top 20 controls.

Developing a shiny new security program but neglecting to train your employees on it is like shipping this year’s hottest new product but forgetting to stash the instruction manual in the box. While your users may have a basic understanding of antivirus and web filtering controls or even patches, they likely aren’t as aware of all the behind-the-scenes work you did to get the underlying programmatic elements of your security program in place.

The key principle behind Critical Control 17 is implementing a security awareness and training program that instructs employees on best practices, current defense strategies, and what is expected of them.

Baseline Security Awareness at Your Organization

Start by developing materials that provide a basic overview of the expectations your business has for IT system usage. This can often be part of an acceptable use policy (AUP), but plenty of information security teams also create a document that lives outside of an AUP.

These rules are the starting point for your security awareness program. New hires should see and acknowledge them within their first week, and all employees should be reminded of them once a year. Requiring everyone to regularly sign off on the policy means no one can claim ignorance in the event of an incident.

Annual video trainings on broad, high-level security concepts are a good way to get new employees up-to-speed and reinforce your baseline rules. These should also be consistently updated so they’re always relevant.

Security Awareness Should Mature Over Time

With your baseline rules set, it’s time to start thinking about ongoing security awareness training that addresses new technologies, threats, and business requirements. As everyone knows, the security landscape is always changing, so it’s important to keep staff informed and up-to-date.

It can sometimes take a few formats for your message to really stick. These are a few ideas for different ways to get people on board:

Create posters and place them around your office, in break rooms, and near copiers and printers.

Develop a phishing awareness training program that includes simulated phishing campaigns targeted at your own employees. Keep statistics on what percentage of your workforce is susceptible to these emails, and require those who click to undergo additional training. Tracking click rates over time also tells you whether your social engineering training is effective.

Create and distribute quarterly information security newsletters.

Recognize employees who go above and beyond to report potential incidents so people know they are an important aspect of your security program.
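Tracking those phishing click rates can be as simple as a few lines. A minimal Python sketch (the campaign records are made up for illustration) that computes per-campaign susceptibility so you can see whether the number is trending down:

```python
def click_rate(campaign):
    """Percentage of recipients who clicked the simulated phishing link."""
    return 100.0 * len(campaign["clicked"]) / len(campaign["recipients"])

# Two simulated quarterly campaigns against the same (hypothetical) staff.
q1 = {"recipients": ["ann", "bob", "cam", "dia"], "clicked": ["bob", "dia"]}
q2 = {"recipients": ["ann", "bob", "cam", "dia"], "clicked": ["dia"]}

improving = click_rate(q2) < click_rate(q1)   # True if training is working
```

A falling rate quarter over quarter is one concrete, reportable measure of awareness-program effectiveness.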

Zero In on Security Topics Relevant to Your Business

Beyond these common topics, you should also offer targeted trainings based on best practices your workforce has failed to adhere to throughout the year or the industry you work in. For instance, hospitals could enhance their HIPAA trainings to focus on security best practices, or retailers could delve into some PCI-DSS topics. Training can also be tailored to certain roles within your organization through special programs for your service desk, senior executives, and their administrative support staff.

Targeted trainings should be held on a quarterly basis and be mandatory—tracking participation will ensure no one falls through the cracks. Getting buy-in from senior management and HR will also help you take action if members of your workforce don’t complete the training.

Don’t Patronize, Empower

It’s important to avoid falling into the trap of considering your workforce to be a nuisance—or worse yet, people who couldn’t possibly understand what you do and try to undermine your efforts by acting, well, stupidly. What comes from this is a hostile relationship that can upend all the work you’ve put into securing your organization against threats.

Instead, empower your workforce. They are the frontline defenders of your organization and should be trained on how to identify and report the most common indicators of an incident.

For instance, imagine an employee realizes something isn’t quite right after he clicks on a malicious link from a phishing email. If he feels IT will respect him and reserve judgment, your team could be alerted right away and begin triage. However, if he feels IT treats him poorly when he reports such issues, he may be less likely to reach out and instead bury his head in the sand. That approach does nobody any good.

Additionally, make sure your workforce knows how to reach the IT and ITSEC teams. If you have a service desk, they should be included in the training as well.

Putting all of these elements in place takes time, but the work is well worth the effort. Security is everyone’s responsibility, and a fully engaged workforce will wind up simplifying the job of your information security team and make sure your program works.

Need Help Developing a Security Program at Your Organization? We Can Support You!

What is CIS Critical Control 16?

In the world of InfoSec, the sexy stuff gets all the attention. Everybody wants the latest and greatest next-gen product to get rid of the APTs and h4x0r$ hiding within their networks.

But what if I told you…

You don’t need all those bells and whistles to have a great security program? Specifically, by following CIS Critical Control 16: Account Monitoring and Control, which focuses on processes to manage the lifecycle (creation, use, dormancy, and deletion) of system and application accounts, you can do much good by practicing one of the most unsexy parts of InfoSec:

Controlling what accounts live and die, and when

Setting simple configuration settings to reduce the risk of an account being compromised, and to enable you to recognize when it is

Enforcing two-factor authentication

Let’s take a slightly deeper dive into these three practices, shall we?

Account Lifecycle Management

Managing the lifecycle of system and application accounts is one of the most effective controls you can have in place to protect your organization. Attackers spend their time poking around, harvesting credentials from successful phishing attacks and compromised websites, looking to use them to gain a foothold in your environment. If you don’t have a sound account management policy, think of that guy in DevOps who got fired six months ago for, um, inappropriate web browsing at work. Nobody disabled his accounts? Then there’s a good chance his credentials have been compromised, and once you are identified as a target, attackers will do everything they can to get in. While this is a fictional example, for many organizations the reality is contractor accounts that were never removed or disabled, or orphaned service accounts, either of which can be leveraged to gain access and move laterally inside your organization.

You don’t know what you don’t know, so an in-depth review is in order to determine what kinds of accounts you have, which accounts are still active, and which ones are no longer valid or in use. Armed with those results, sit down with your HR team, and ask (politely) that security be a part of onboarding and offboarding all users. If you already do this, great! Skip to the next paragraph. If not, work with HR to develop a communication process so that security is made aware when someone is hired, fired, quits, or goes on sabbatical, so you can take the appropriate steps. Ideally, you want to be able to disable all access within minutes of an individual leaving the organization. You’ll also want a policy defining how long dormant accounts are kept before they are deleted.
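A starting point for that in-depth review is simply flagging accounts whose last login falls outside your dormancy threshold. A Python sketch (the account records are illustrative, not pulled from any real directory API):

```python
from datetime import datetime, timedelta

def dormant_accounts(accounts, now, max_idle_days=90):
    """Return names of accounts with no login inside the dormancy window."""
    cutoff = now - timedelta(days=max_idle_days)
    return [a["name"] for a in accounts
            if a["last_login"] is None or a["last_login"] < cutoff]

now = datetime(2019, 6, 1)
accounts = [
    {"name": "jsmith",      "last_login": datetime(2019, 5, 20)},
    {"name": "svc_old",     "last_login": datetime(2018, 11, 2)},  # orphaned service account
    {"name": "contractor1", "last_login": None},                   # never logged in
]

stale = dormant_accounts(accounts, now)   # candidates to disable, then delete per policy
```

The same comparison works whether the records come from Active Directory, LDAP, or an application’s own user table; only the data source changes.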

Configuration Settings

There are a lot of different settings you can set, buttons you can push, and configurations you can “configurate” that can have a very positive impact on your security posture without making life difficult for end users. CIS Critical Control 16 spells out some discrete settings, such as:

Automatically log users off after a set period of inactivity

Set lock screens on devices

Monitor for stale accounts that may have fallen through the cracks

Use account lockouts

Set accounts to expire at regular intervals, based on business need and risk appetite

Centralize authentication from a single source, such as LDAP

The first four listed above can be set via Group Policy. Voila!

Two-Factor Authentication

Two-factor authentication (2FA) is one of the most effective controls you can implement to protect your organization, but it has to be rolled out sensibly and with executive support. It may not be practical to place 2FA on every account in the environment, but it certainly makes sense for administrator accounts, dedicated accounts that access sensitive information, and remote/VPN access. If you use it for nothing else, use it to protect VPN access. While not technically impossible, it would be extraordinarily difficult for someone to gain full access using compromised credentials via remote access if they do not have the required second factor. One of the underlying principles here is to make success so expensive (in time, resources, or dollars) that attackers move on to other targets. 2FA for remote access is one of the best bang-for-your-buck controls you can put in place. The catch: if you use hardware certificates for 2FA on laptops, the control is effectively rendered useless without whole-disk encryption. If an attacker has credentials and access to a corporate laptop, it’s game over at that point, and it’s a lot easier to steal a laptop (with the requisite certificate, unprotected by encryption) than it is to steal an iPhone and crack the PIN.
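To make the second factor concrete, here is a minimal RFC 6238 time-based one-time password (TOTP) implementation in Python using only the standard library. This is a teaching sketch; real deployments should use a vetted 2FA product or library rather than hand-rolled code:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (ASCII “12345678901234567890”, base32 GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ), `totp(..., for_time=59)` returns 287082, matching the published test vector. Because the code changes every 30 seconds and depends on a shared secret, stolen static credentials alone are not enough to authenticate.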

So there you have it. We’ve covered the high points of CIS Critical Security Control (CSC) 16: Account Monitoring and Control. If you need help developing or implementing your security strategy—or if you don’t have one—Rapid7 has tons of resources that can help. From our virtual CISO service, to maturity assessments of your security program, to penetration testing to validate your controls, to taking over some of the day-to-day operations of your security team—we’ve got you covered. Let us know how we can help you be successful!

Decades ago, your network was a collection of routers, firewalls, switches, wall ports, and what seemed like a million miles of cable. The only way for your employees and guests to access it was to be seated near a port and get plugged in. It was a pretty straightforward proposition but didn’t allow much in the way of mobility or convenience. But hey, it was all we had. As a young admin I used to dread seeing a calendar invitation for a 20-person meeting because I knew it meant setting up the “snake farm” in the boardroom so that every person could bring their laptop. It was nearly always a mess.

At the turn of the century we saw some of the first devices using the IEEE LAN/MAN Standards. We know these standards collectively as IEEE 802 and they changed the face of corporate (and home) networking forever. Our admin pool thought it was a miracle as we brought those first access points online and fitted laptops with PCMCIA 802.11b cards. Sure it was all in the clear, and sitting too close to a cordless phone would cause problems, but we were too excited to care.

What is CIS Critical Security Control 15?

As we detour off of Memory Lane, we now know that wireless access is ubiquitous and even expected in the enterprise. Access is no longer limited to a few lucky executives; nearly all of your workforce needs to be mobile, both in the office and on the go. With all these emails, documents, logins, and the like being transmitted around us, we turn our attention to securing this sensitive data. Here at Rapid7 we rely on the Critical Security Controls to guide us, and in the case of Wireless Access Control, we look to CSC 15. The Control itself reads:

“The process and tools used to track, control, prevent and correct the security use of wireless local area networks (LANs), access points and wireless client systems.”

How can you implement CIS Control 15?

It seems pretty straightforward, right? Control your access. But as with so many straightforward things, the devil is in the details. While there are a multitude of ways to reduce and control access to your wireless network, we’ll look at some simple steps that are often overlooked.

Do not broadcast your SSID. Seriously, turn off broadcast. While it’s not foolproof, it will stop most casual “curious types” from trying to have a peek.

Deploy TLS certificates on your main/secure networks. This takes a little extra effort to set up but is far superior, because end-user devices will need the certificate, which you control. This also helps counter the threat of rogue access points being set up with the same name.

Use WPA2-Enterprise. This forces per-user authentication via RADIUS. Again, it’s more involved than setting a shared WPA2 passphrase but far more secure.

Adjust and limit your radio broadcast levels. Some access points are very powerful and may broadcast well outside of your building. With simple testing, you can tweak these levels so coverage extends as little beyond your building as possible.

Perform Wireless (RF) site assessments. This can be performed by professional services organizations or by running some readily available tools on your own. Performing these types of assessments can help in identifying rogue wireless devices on your network as well as verify that the controls you currently have in place limiting wireless to authorized access points are functioning appropriately.

Create a guest network. It’s alarming how many companies don’t do this. Having a segmented, bandwidth-limited guest network that does not have access to any of your critical resources will allow your vendors and visitors to get to their emails and VPNs without giving them the keys to the kingdom.

Monitor. Keeping an eye on (and logging) who is connected to what networks will help in the event of an incident. You know who is in your house, right? This is no different.
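Both the site-assessment and monitoring steps above boil down to comparing what you observe over the air against what you have authorized. A toy Python sketch (the SSIDs and BSSIDs are invented) that flags unknown access points from a survey:

```python
# Authorized (ssid, bssid) pairs; in practice, exported from your controller.
AUTHORIZED = {
    ("CorpNet",   "aa:bb:cc:00:00:01"),
    ("CorpGuest", "aa:bb:cc:00:00:02"),
}

def rogue_aps(observed):
    """Return (ssid, bssid) pairs seen in a survey but not on the authorized list."""
    return sorted(set(observed) - AUTHORIZED)

survey = [
    ("CorpNet", "aa:bb:cc:00:00:01"),
    ("CorpNet", "de:ad:be:ef:00:99"),        # same name, unknown radio: likely rogue
    ("CoffeeShopWiFi", "12:34:56:78:9a:bc"),  # neighboring network, worth noting
]
```

Matching on the BSSID as well as the SSID is what catches the evil-twin case, where a rogue access point advertises your exact network name.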

Wireless access is a convenient, and perhaps even mandatory, component of your overall network. The protection of this component should be an elevated and discrete part of any mature security plan. Knowing who is connected and from where is key. It’s not like there is a cable to chase down.

Ready to get started? We assist organizations of many sizes and industries in maturing their security programs.

Let’s start with some simple, yet often unasked questions. Do you know what critical assets—information and data, applications, hardware, SCADA systems, etc.—exist in your organization’s network? Do you have a data classification policy? Who defines the criticality of systems and information? These are questions many organizations struggle to answer. It’s no wonder companies have difficulty determining which people, computers, and applications have both the need and the right to access these critical assets, and the information stored on them.

CIS Critical Control 14 says that network segmentation should be based on a classification of the information stored on the servers. Let’s dig into what the challenge is and how you can address it.

What’s the problem?

If you carefully examine any number of high-profile breaches, you’ll find that attackers were often able to access sensitive data by first compromising systems on the same network segments that had much lower criticality scores and held much less sensitive information. In other cases, sensitive data or services run on the same physical or virtual systems as data or services that are far less critical.

Classification

Data classification needs to be simple, otherwise it will be ineffective. Here’s one example:

Level 1: Data for public consumption. Data that may be freely disclosed.

Level 2: Internal data not for public disclosure.

Level 3: Sensitive internal data that if disclosed, could affect the company.

Level 4: Highly sensitive corporate, employee and customer data.

Based on this classification, would you want to store level 1 data on the same system as level 4 data? Surprisingly, the answer may not be as clear-cut as you think. If you’re dealing with data on a central file server or central database server, you may not have a choice. Application servers, ERP systems, and web servers are easier to classify. Regardless, if you are left in a situation where data of different classification levels must reside on the same server, be sure that the intermixed data is labeled and classified using the highest classification rating, and thus protected accordingly.
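The “highest classification wins” rule for intermixed data can be sketched in a few lines; the level names below are just the example levels from the classification scheme above.

```python
# Sketch: intermixed data is labeled at the highest classification present.
LEVELS = ["public", "internal", "sensitive", "highly_sensitive"]  # low → high

def label_for_mixed_data(labels):
    """Return the label the host system must be protected at."""
    return max(labels, key=LEVELS.index)

print(label_for_mixed_data(["public", "sensitive", "internal"]))  # → sensitive
```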

To put this in perspective, let’s apply this concept to your personal data. You likely store your own level 4 data (your social security card, your passport, your birth certificate) in a locked cabinet or drawer, or perhaps in a fireproof safe or in a safety deposit box at the bank. In order to access any of these items, there are security controls in place for good reason. Would you be likely to place your grocery list in your safe deposit box? (If you’re Bobby Flay maybe.) Conversely would you be ok if your social security card and grocery list were in the same unlocked drawer in your kitchen?

SEG/ME/NT

The first step is ensuring that your network is segmented based on the classification described above. CIS Critical Control 14 states that network segmentation should be based on the label or classification level of the information stored on the servers. All systems with data classified as sensitive should be located on separate VLANs with firewall filtering.
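As a toy illustration of placement by classification, the mapping below assigns a server to a network segment based on the highest level of data it stores. The VLAN IDs are purely illustrative assumptions, not a recommendation.

```python
# Illustrative only: the VLAN IDs are made up. The rule is that a server joins
# the segment matching the highest classification level it stores.
VLAN_BY_LEVEL = {1: 110, 2: 120, 3: 130, 4: 140}

def vlan_for(stored_levels):
    """stored_levels: classification levels (1-4) of data on the server."""
    return VLAN_BY_LEVEL[max(stored_levels)]

print(vlan_for([1, 2, 4]))  # → 140
```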

As outlined in CIS Control 5, one of the main reasons Rapid7 recommends that organizations remove local desktop administrative privileges is to reduce the ability of attackers to move laterally from a compromised system. Similarly, Control 14 recommends that all network switches enable private VLANs to reduce the ability of an attacker on a compromised system to communicate with other devices on the same subnet. Recovering from one compromised system is painful enough; don’t make it easier for attackers.

Access control lists

OK, great, you’ve segmented your network or are on your way. What further controls around sensitive data are a must? Let’s discuss Access Control Lists. Do you give a key to your house to everyone in your neighborhood? Even Martha Stewart is not that polite. No, you want to ensure that all information stored on systems is protected by access control lists. This includes file system, network share, application, and database information. Following the principle of least privilege, users must only be able to access the information and resources necessary as part of their responsibilities.

Most organizations that Rapid7 Advisory Services consults have Microsoft Windows Active Directory deployed in their environments. Active Directory provides a granular level of security control over access to a wide variety of objects (most relevant to this discussion, NTFS files and folders), but this concept can be applied to any system that provides ACL control. Active Directory user and group accounts are a great way to ensure that access to sensitive data is properly restricted on your file servers. If you aren’t interested in the hassle of changing permissions on a bunch of folders, use Active Directory Group Policy. GPOs give administrators the ability to grant or deny users or groups access to specific folders, and audit settings for those folders can also be configured through group policy.

Role-Based Access Control (RBAC), sometimes called role-based security, is an alternative to ACLs that assigns access rights to roles representing job functions rather than to individuals. Because access rights are assigned to roles, not users, managing user rights is as simple as assigning a role to the user account. This makes user rights management far easier, especially when users change job functions.
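A minimal RBAC sketch, with hypothetical roles and permissions, shows why a job change becomes a single role reassignment rather than a pile of per-user ACL edits:

```python
# Minimal RBAC sketch with made-up roles: rights attach to roles,
# users attach to roles.
ROLE_PERMISSIONS = {
    "hr_analyst": {"read:hr_share"},
    "hr_manager": {"read:hr_share", "write:hr_share"},
}
USER_ROLES = {"alice": {"hr_analyst"}}

def can(user, permission):
    """True if any of the user's roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, ()))

# A job change is a single reassignment, not a pile of ACL edits:
USER_ROLES["alice"] = {"hr_manager"}
print(can("alice", "write:hr_share"))  # → True
```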

Taking the next step

You’ve segmented your network and are using proper ACLs to control access. Great. As Obi-Wan would say, you’ve taken your first step into a larger world. So what are the next steps you can take to ensure your security controls around data access are robust?

Audit your ACLs and AD users and security groups regularly. Just because you established access 3 or 6 months ago doesn’t mean that your organization has remained static. Users come and go, and change roles within companies, all the time. Access creep is a real enemy of security. Ensure that regular reviews occur with a member of your business; remember, changes to access are a business decision, not just an IT decision.

The most sensitive data you guard should be encrypted at rest, with secondary authentication required to access it.

Audit access to non-public and sensitive data. It’s important to know who is accessing this type of information, how often, and what they are doing with it. There are many file auditing applications that can assist with this task.

IT should regularly report on stale data, which is any data that has not been accessed for a standard length of time, defined by your business. This data should be archived and removed from your systems.
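The stale-data report described above reduces to a simple filter over (path, last-accessed) records. The one-year threshold below is an assumption; your business defines the actual standard length of time.

```python
import datetime as dt

STALE_AFTER = dt.timedelta(days=365)  # assumed threshold; your business defines it

def stale_files(files, now):
    """files: iterable of (path, last_accessed) pairs."""
    return [path for path, accessed in files if now - accessed > STALE_AFTER]

now = dt.datetime(2018, 1, 1)
files = [("/share/q3_report.xlsx", dt.datetime(2017, 11, 5)),
         ("/share/old_budget.xls", dt.datetime(2015, 2, 1))]
print(stale_files(files, now))  # → ['/share/old_budget.xls']
```

Files flagged by a report like this are the candidates for archival and removal.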

There’s a lot to digest here, where should I start?

OK, I know. Classification, segmentation, ACLs, encryption, multi-factor authentication, auditing. It’s a lot to take in and can certainly be overwhelming. The secret is realizing you are not alone. Many organizations struggle to identify the problems and prioritize solutions. Here’s my recommendation:

Develop an organization-wide Data Classification Policy and apply it to all IT systems, applications, databases, and data. Channel your inner GI Joe—knowing is half the battle. This is directly related to CIS Control 1 and Control 2. You can’t protect what you don’t know you have, and you can’t determine who should have access to what until you know the types of data in your environment.

Segment your network based on the information from your classification policy.

Implement access control lists on all systems, and audit not only the ACLs themselves, but also the detailed user access to those systems and data.

Encrypt data both at rest, and in transit, especially when data traverses trust zones.

Data protection is one of the cornerstones of a solid security program, and it is a critical function of the CIA Triad of Confidentiality, Integrity, and Availability. Data protection, as characterized by Critical Control 13, is essentially secure data management. What do we mean by that?

What is CIS Critical Security Control 13?

Secure data management encompasses managerial, procedural, and technical controls that prevent data from leaving the environment in an unstructured or unauthorized way. This control overlaps with several others: its success depends on the successful implementation of other Critical Security Controls.

Managerial controls are a vital aspect of data protection. The foundation of a successful implementation begins with executive support for policies that outline what kinds of data the organization has, how it is classified or categorized, and what can and cannot be done with the data. A data inventory is exceptionally useful for understanding your environment and how interconnected systems and subsystems really are. It can also be used to help define data retention requirements and policies. Policies by themselves can’t stop a breach or data leakage, but they can give employees the knowledge of how the organization uses data and what their roles are in protecting that information.

The second type of control used in data protection is procedural. Procedural controls provide structure and consistency within the organization to protect data. Common examples are scanning for sensitive information to ensure it is stored where it is supposed to be, and developing processes, procedures, and configurations to ensure that data is routed to and stored in the appropriate areas.

Technical controls are what actually protect the data: encryption, blocking access to known file transfer and email sites, and blocking USB ports. Data Loss Prevention (DLP) tools and Privileged Account Management (PAM) tools can also be used to protect data. These controls are specifically called out in the sub-controls of Data Protection.

Why is CIS Critical Control 13 Important?

So why is data protection important? In many cases, it’s either a law that you protect certain kinds of data, or you might have regulatory obligations, such as PCI, to make good faith efforts to protect data. Good data management programs utilize all three types of controls—managerial, procedural, and technical—to make sure that you don’t have unnecessary exposure to the axiom, “you don’t know what you don’t know.” If you don’t know what kinds of data you have, you don’t know what you need to protect, where it lives, and what needs to be done to secure that data.

Implementing CIS Critical Control 13

The bad news: Managerial controls can be the hardest to implement. They require executive sponsorship, leadership, and funding to set the tone for the organization, and to ensure that resources are available. Everyone, from the CEO down, including the security team, needs to eat the same dog food.

The good news: Procedural and technical controls are usually easier to put in place, and some can be implemented for little to no cost, such as blocking USB mass storage devices and blocking webmail and file transfer websites (get granular! If there is a business need to access these sites, only allow those with that need to access them). Explore whole-disk encryption; a free option is available on most commercial operating systems in use today. And don’t forget to set appropriate file and folder permissions and ACLs to restrict access to data to those who have a valid need to know. All of these can be done at relatively low cost and can provide a great foundational layer of data protection for your organization.

The bottom line: We all have to take appropriate steps to protect our organization’s sensitive data. Rapid7 offers several solutions, such as InsightIDR, Metasploit, and IoTSeeker, that can help determine what data is exposed and whether or when users are trying to circumvent controls or steal data outright. (Never discount the insider threat!) The Rapid7 Advisory Services team is also your ally in evaluating your security program’s maturity, identifying gaps, and providing recommendations and solutions. Protecting you—and your data—is a common goal.

Key Principle: Detect, prevent, and correct the flow of information transferring across networks of different trust levels, with a focus on security-damaging data.

What Is It?

Boundary defense is control 12 of the CIS Critical Controls and is part of the network family. There are ten subsections to this control that cover your DMZ, firewalls and proxies, IDS/IPS, NetFlow, and remote access.

Boundary defense is typically an organization’s first line of protection against outside threats. Today, many attackers focus on exploiting systems that they can reach across the internet; they are constantly probing perimeters for vulnerabilities and information needed to build their attack plan.

Your boundary defense strategy should not just be about keeping the attackers out, but also keeping sensitive information in. I am going to take you through how you can strengthen your perimeter and police the traffic flowing into and out of your network by implementing a comprehensive boundary defense strategy.

How to Implement CIS Control 12

Segment Network and Control Flow

The key to boundary defense is a multi-layered approach focused on efficiently segmenting your networks and controlling the flow of your data. Network segmentation can be strengthened by the use of firewalls and proxies.

A DMZ should be set up between your internal network and the internet. To minimize the impact of an attacker pivoting between systems, configure your DMZ systems to communicate with the internal network via application-layer proxies. Configure outbound proxies to filter malicious websites from being visited by end users. Apply blacklists to block traffic to known malicious IPs, or whitelists to block access to everything not needed for approved business purposes. There is much debate over which is better, blacklisting or whitelisting. Blacklists are restricted to known variables (IP addresses, documented malware, viruses, etc.). Blacklisting is simpler to implement because it blocks the known bad and allows everything else, and blacklists can be updated automatically via scripting. Whitelisting, on the other hand, is fundamentally a stronger security control, but it is also an exhaustive process that takes more time, tuning, and resources to monitor and update. Whitelisting can help block threats like 0-days, precisely because they are not yet known.
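The behavioral difference between the two approaches comes down to how an unknown destination is treated, which a short sketch makes plain (the IPs below are documentation-range placeholders, not real indicators):

```python
def blacklist_allows(dest, blacklist):
    """Block the known bad; allow everything else."""
    return dest not in blacklist

def whitelist_allows(dest, whitelist):
    """Allow the known good; block everything else, including 0-days."""
    return dest in whitelist

known_bad = {"203.0.113.7"}
known_good = {"198.51.100.10"}
never_seen = "192.0.2.55"                        # e.g., a brand-new C2 server
print(blacklist_allows(never_seen, known_bad))   # → True (slips through)
print(whitelist_allows(never_seen, known_good))  # → False (blocked by default)
```

The whitelist’s default-deny posture is exactly why it can stop a threat no one has cataloged yet, and also why it demands so much more tuning.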

Proxies should support the decryption of network traffic and the logging of individual TCP sessions to ensure sensitive information is not being exfiltrated from your network. At your firewall, block all outbound traffic except the approved business applications you wish to permit. Do NOT configure your firewall’s outbound services to allow ANY. It is important that ALL inbound AND outbound traffic is filtered and monitored.

Network-based intrusion detection and intrusion prevention tools are a great addition to your defense-in-depth strategy. A network-based Intrusion Detection System (IDS) should be set up to alert on attacks. IDSes sniff the traffic flowing through your network in what is called “out-of-band” or “promiscuous” mode: the IDS receives a copy of the packets instead of sitting inline on the network. IDSes use signature-based detection to alert on potentially dangerous or malicious activity and are very helpful in providing visibility into the traffic flowing through your environment.

Where should you place IDS sensors? Consider asking yourself these three questions: What is my risk? What am I trying to monitor and protect? How does the traffic flow in my environment? Major areas include just inside the firewall, on servers at the DMZ, between business partner networks, between untrusted networks (remote access), at wireless access points and between different internal groups/VLANs inside the network.

A network-based Intrusion Prevention System (IPS) should be deployed to complement your IDS. Where IDS devices are “listen-only” passive solutions, an IPS actively defends and blocks unwanted or malicious communications from getting in. IPSes are typically placed “in-line” with your network. In-line mode positions the IPS in the packet flow and allows for real-time responses to stop attacks/violations. IPS devices can drop malicious packets, block traffic based on source address and reset connections. IPS solutions detect attacks based on signatures and anomaly-based behavior. IPSes typically sit directly behind the firewall. The main reason to have an IPS is to block known attacks and control traffic flow on your network.

To be effective, IDS/IPS devices must be tuned and monitored. As mentioned, finely tuned IDS/IPS systems will be a great addition to your defense-in-depth strategy. However, a poorly set-up IDS/IPS will be noisy and may be disruptive to users or degrade network performance. Some open source IDS solutions we recommend are Snort, Suricata, and BroIDS. Most commercial firewall tools offer a network-based IPS.

Collect, Analyze, Monitor

Tying in to CIS Control 6, logs from your firewalls, IPS/IDS, and DMZ should flow through your SIEM solution for correlation, monitoring, and analysis. At the DMZ, full packet headers of traffic flowing through the network border should be recorded, and NetFlow collection and analysis tools should be deployed. There are many uses of NetFlow including monitoring network bandwidth and traffic patterns, monitoring which applications and protocols are using the most bandwidth, detecting Denial-of-Service attacks, and so on. NetFlow is ideal for monitoring communication behaviors over time and detecting attacks without signatures.

Important things to look for when monitoring and analyzing logs from boundary defense tools include:

Back-channel connections

Unusually long TCP connections

Unusual SSH activity

Unusual RDP activity

Connections to undefined ports

Large transmissions over UDP

Unauthorized VPN connections

Network sweeps

Systems interacting with known botnets or bad IPs

Other network behavior anomalies
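A few of these indicators lend themselves to simple triage rules over flow records. The thresholds and the “defined ports” set below are illustrative assumptions, not recommendations; real tuning depends on your environment.

```python
# Toy triage over flow records of the form (dst_port, duration_s, proto, bytes).
# Thresholds and the "defined ports" set are illustrative assumptions.
DEFINED_PORTS = {22, 25, 80, 443, 3389}
LONG_TCP_S = 6 * 3600
LARGE_UDP_BYTES = 50 * 1024 * 1024

def flags(record):
    port, duration, proto, size = record
    hits = []
    if proto == "tcp" and duration > LONG_TCP_S:
        hits.append("unusually long TCP connection")
    if port not in DEFINED_PORTS:
        hits.append("connection to undefined port")
    if proto == "udp" and size > LARGE_UDP_BYTES:
        hits.append("large transmission over UDP")
    return hits

print(flags((4444, 8 * 3600, "tcp", 1024)))
# → ['unusually long TCP connection', 'connection to undefined port']
```

Rules like these are a starting point for SIEM correlation, not a replacement for analyst review.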

Remote Access Control

Another subsection of Boundary Defense is remote access. Remote access to the organization’s internal network should be monitored and tracked, and devices configured for remote access should be managed by the enterprise. When a device connects to the internal network, its security profile (configuration policies) should be checked to ensure security configurations and patch levels are up to date.

Additionally, all remote logins to the internal network should require two-factor authentication. According to the 2017 Verizon Data Breach Investigations Report, 81% of security breaches involve weak or stolen credentials, which highlights why authentication is often the weak link in organizations’ security defenses. A number of security vendors offer Access Management and Multi-Factor Authentication solutions.

We’ve now passed the halfway point in the CIS Critical Controls. The 11th deals with Secure Configurations for Network Devices. When we say network devices, we’re referring to firewalls, routers, switches, and network IDS setups specifically, but many of these concepts can and should be applied to DHCP/DNS appliances, NAC enforcement appliances, and other solutions, too. The goal is to harden these critical network infrastructure devices against compromise, and to establish and maintain visibility into changes that occur on them—whether those changes are made by legitimate administrators or by an adversary.

The first three of the seven sub-controls state:

11.1: Compare firewall, router, and switch configuration against standard secure configurations defined for each type of network device in use in the organization. The security configuration of such devices should be documented, reviewed, and approved by an organization change control board. Any deviations from the standard configuration or updates to the standard configuration should be documented and approved in a change control system.

11.2: All new configuration rules beyond a baseline-hardened configuration that allow traffic to flow through network security devices, such as firewalls and network-based IPS, should be documented and recorded in a configuration management system, with a specific business reason for each change, a specific individual’s name responsible for that business need, and an expected duration of the need.

11.3: Use automated tools to verify standard device configurations and detect changes. All alterations to such files should be logged and automatically reported to security personnel.

If we distill these concepts, the idea is that we need to begin from a baseline-hardened configuration, then apply strict change management operational and detective controls to any modifications. Easy enough to say, but how do we do that?

Baseline hardening for network devices can be established either by using guides from the vendor (if they are available), or by utilizing an open, peer-reviewed framework such as the CIS Benchmarks or the Defense Information Systems Agency (DISA) Security Technical Implementation Guides (STIGs). Vendor guides may be helpful in offering quicker, more prescriptive advice custom to your platform, but they may not be comprehensive or free of bias, so we don’t typically recommend them as primary guides. Both the CIS Benchmarks and the DISA STIGs are free; we find that the CIS Benchmarks are often easier to approach for many organizations since the guides are in PDF and are more human-readable. If you’re a current InsightVM or Nexpose customer, you can configure credentialed device scanning and reporting against the CIS Benchmarks to report which settings may be out of compliance. Other vulnerability management solutions may also have the ability to scan against these templates; doing so makes it that much easier to reach and maintain these baselines, and to ensure you receive alerts if a settings change contradicts them.

Speaking of changes, the second facet of these first three sub-controls deals with Change Management: those wonderful tickets that ask for the whos, whats, whens, wheres, whys, hows, and what-ifs (back-out plans) of any significant change. If you have any compliance objectives at all, you probably have a CM ticketing and approval system. While they’re often grumbled about, these systems provide a good support structure for stability and predictability in your day-to-day operations. A well-tuned CM system is as low-friction as possible and gives network and system admins a good ledger of all changes; this lets you virtually wind back the clock and figure out whether an unintended interaction of one or more changes has caused an operational issue. A “sliding scale” approach, one that allows a reviewer to decide that a small change requires only minimal information while a large, complex change requires more detailed plans and meetings, is often a better way to tune than a one-size-fits-all CM detail requirement.

The last facet of these first three sub-controls has to do with change detection. Many network configuration manager platforms offer the ability to alert if a change is detected from a previous configuration, and those alerts can be reconciled against a change management system. Such checks and balances can keep network admins honest and offer a detective control against attackers adding accounts or modifying configurations to their advantage (provided the attacker neglects to disable the conduit to the network config management platform, of course). Open-source solutions such as RANCID can provide change detection in this manner as well.
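Under the hood, this kind of change detection is a diff of the running configuration against the approved baseline. Here is a minimal sketch using Python’s standard difflib; the IOS-style config lines are hypothetical examples.

```python
import difflib

def config_drift(baseline, running):
    """Return config lines added or removed relative to the approved baseline."""
    diff = difflib.unified_diff(baseline.splitlines(), running.splitlines(),
                                lineterm="", n=0)
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

baseline = "hostname edge-rtr\nno ip http server\n"
running = "hostname edge-rtr\nip http server\nusername evil privilege 15\n"
print(config_drift(baseline, running))
```

Each returned line is a candidate for reconciliation against the change management system; anything without a matching ticket deserves a closer look.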

Sub-control 11.4 is relatively straightforward: Manage network devices using two-factor authentication and encrypted sessions. Many network infrastructure devices can now directly integrate with multi-factor authentication solutions. If your 2FA platform of choice doesn’t directly integrate, consider restricting administration to geographically disparate or independently hosted administrative “jump stations” and implementing 2FA on those stations. We’ll talk more about those in 11.6. In the meantime, the second part of this section covers the use of encrypted sessions: this means no telnet. Never. Not anywhere. Technically you could tunnel it through an IPsec tunnel, but that’s a lot more work than simply turning on SSH v2, testing it, making sure SSH v1 has been disabled because of its several security flaws, and then disabling telnet. Seriously, if you do nothing else in this guide, disable telnet everywhere on your network after testing SSH v2.
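A crude way to verify the telnet side of this sub-control is to scan exported device configurations for management lines that still accept telnet. The “transport input” syntax below is a Cisco-IOS-flavored assumption for illustration; other platforms express this differently.

```python
# Crude check of an exported config: management (vty) lines should accept SSH
# only. The "transport input" syntax is a Cisco-IOS-flavored assumption.
def telnet_disabled(config_text):
    for line in config_text.splitlines():
        if line.strip().startswith("transport input") and "telnet" in line:
            return False
    return True

good = "line vty 0 4\n transport input ssh\n"
bad  = "line vty 0 4\n transport input telnet ssh\n"
print(telnet_disabled(good), telnet_disabled(bad))  # → True False
```

A config-text check like this complements, but does not replace, actually port-scanning devices for a listening telnet service.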

Sub-control 11.5 is also fairly clear: Install the latest stable version of any security-related updates on all network devices. Surprisingly, when we perform evaluations of customer environments we see a lot of network infrastructure devices that are treated with the “If it ain’t broke, don’t patch it” philosophy. These infrastructure devices are often one or two hops away from laptops or other mobile systems that enter and leave the network frequently (sometimes several times a day), and can present a broad and rich attack surface if not hardened and patched frequently. The days of considering attack surfaces only on your outer boundaries are long gone. The problem is many of these network infrastructure devices require downtime to properly patch and test, and in a substantially complex environment that can overwhelm the number of network engineers available to perform that function. A mixture of high-availability configurations for failover, combined with automation both for patching and post-testing, can go a long way in moving beyond this critical security maturity level. With that said, if you don’t already have this capability and architecture, this can be one of the hardest sub-controls to meet. It should not be considered optional or nice-to-have; patching of all network devices is essential in risk mitigation and proper defense-in-depth.

The second-to-last sub-control is 11.6: Network engineers shall use a dedicated machine for all administrative tasks or tasks requiring elevated access. This machine shall be isolated from the organization's primary network and not be allowed internet access. This machine shall not be used for reading email, composing documents, or surfing the internet.

The aim of this control is to limit the likelihood of an attacker compromising a network engineer’s 'daily-driver' machine and riding into the firewall, router, or switch via an admin channel. By setting up a secured and limited-functionality jump station, you can create a choke point to apply detective and preventive controls that make it much more difficult for an attacker to pass through undetected. Technologies such as endpoint protection agents, session recording, multi-factor authentication, file integrity monitoring, and good workstation hardening can strengthen this, along with stringent limitations on where network devices will accept incoming SSH sessions from (because you disabled telnet, right?) and where they’ll allow connections to. If you disallow connectivity from that jump station out to the internet, email, or document hosting platforms that may carry infection vectors, you significantly limit the risk of compromise from that angle. It’s worth noting again that you shouldn’t create just one jump station, as it would represent a single point of failure. Create a few on geographically disparate or independently-hosted machines. If you’re like us and enjoy having a backup to your backup, you can also leave an aux or console port enabled, but make sure you set up and test alerting such that it gets everyone’s attention if it is ever used; it should be a very rare case and always in an emergency.

The last sub-control further defines how segmented your administrative connectivity should be from other business channels. 11.7 states: Manage the network infrastructure across network connections that are separated from the business use of that network, relying on separate VLANs or, preferably, on entirely different physical connectivity for management sessions for network devices. If you’re going to embark on an internal network segmentation initiative, the easiest and most predictable network to start with is usually your network infrastructure device administration connectivity. These devices are probably not changing location very often; nor are they entering and leaving your network at semi-random intervals. This network is also a great place to try out additional detective alerts such as getting everyone’s attention if there’s an unscheduled network port scan, failed logins to any devices, or any new devices added to this segment. Network administration, backup, and printer networks are favorite places for attackers to hide and attempt lateral movement through your environment; due to their likely static and predictable nature, they should be the first place you can apply much more strict control without impacting business operations.

The next stop is Critical Control 10: Data Recovery Capability. As is the case with all expeditions, the journey tends to be bumpy but thrilling nonetheless. For your safety, please remain seated and keep your hands, arms, feet, and legs inside the train. Thank you, and enjoy the ride. Away we go!

What the Data Recovery Capability Control Covers

Center for Internet Security (CIS) states the following is the key principle of this control: “The processes and tools used to properly back up critical information with a proven methodology for timely recovery of it.” The control standard consists of four criteria, which are labeled as foundational elements of a security program; these focus on system backups and testing. The standards are as follows:

Ensure that each system is automatically backed up on at least a weekly basis, and more often for systems storing sensitive information. To help ensure the ability to rapidly restore a system from backup, the operating system, application software, and data on a machine should each be included in the overall backup procedure. These three components of a system do not have to be included in the same backup file or use the same backup software. There should be multiple backups over time, so that in the event of malware infection, restoration can be from a version that is believed to predate the original infection. All backup policies should be compliant with any regulatory or official requirements.

Test data on backup media on a regular basis by performing a data restoration process to ensure that the backup is properly working.

Ensure that backups are properly protected via physical security or encryption when they are stored, as well as when they are moved across the network. This includes remote backups and cloud services.

Ensure that key systems have at least one backup destination that is not continuously addressable through operating system calls. This will mitigate the risk of attacks like ransomware which seek to encrypt or damage data on all addressable data shares, including backup destinations.
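The recency and multiple-generations requirements from the first criterion reduce to checks you can automate. The thresholds in this sketch are illustrative assumptions; your policies and regulatory requirements set the real values.

```python
import datetime as dt

MAX_AGE = dt.timedelta(days=7)   # "at least weekly"
MIN_GENERATIONS = 3              # assumed: keep multiple restore points over time

def backup_ok(backup_times, now):
    """backup_times: datetimes of the retained backups for one system."""
    if len(backup_times) < MIN_GENERATIONS:
        return False                       # not enough generations to roll back
    return now - max(backup_times) <= MAX_AGE  # newest backup is recent enough

now = dt.datetime(2018, 6, 1)
times = [dt.datetime(2018, 5, 12), dt.datetime(2018, 5, 19),
         dt.datetime(2018, 5, 28)]
print(backup_ok(times, now))  # → True
```

Checks like this only confirm that backups exist and are recent; the restoration tests described below are what prove they actually work.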

Who Cares About CIS Critical Control 10?

As in the past, you may be wondering why this matters. So what? Who cares? What's the big deal here? Well, here's the thing: Adversarial actors like to meddle with more than just configurations and software. Indeed, attackers occasionally alter data, albeit subtly, on compromised machines, potentially contaminating information across the organization. It's one thing when a pipe bursts in one isolated house; it's a different story, and a far bigger mess, when a burst pipe impacts several properties.

This kind of attack has the potential to be catastrophic for any organization, but especially for those handling sensitive information (e.g., PII or medical records). Think about the data in your systems. Consider the cascading effects of large-scale contamination, such as the loss of financial reports for a business or health records for a hospital.

How to Implement This Security Control

The good news is that there are ways to prevent large-scale effects, some of which are discussed below. There are several facets to implementation, but policies, processes, and tools related to backups and testing remain central to this control. In the words of the revered Lil Jon and the East Side Boyz, “Back, back, back it up!” First and foremost, an organization must prioritize backups. Specifically, each system must be automatically backed up on at least a weekly basis, and systems storing sensitive (or critical) information should be backed up more frequently. In addition, backups must be protected: neither physical security nor encryption can be neglected, and encryption applies at rest as well as in transit. The appropriate type of encryption in each case can be determined from your data classification (e.g., confidential vs. restricted or internal vs. external), thereby reducing the cost of encryption. Finally, backups must be tested to ensure that they are whole and functional.

As part of common procedure and practice, an organization should conduct backup tests quarterly, as well as after obtaining new backup equipment. By conducting regular backups and testing, an organization proactively prepares itself for any malicious doings by potential attackers. While frequent backups are most often recommended for data recovery, organizations may approach data recovery differently based on their needs. For example, an organization may leverage an alternate system, warm standby, or system function reassignment. Organizations must work within their means to establish the most effective data recovery program for their needs. With robust, well-implemented restoration procedures, an organization has the ability to restore from a version that predates an infection; moreover, doing so reduces total downtime following an incident.

Like many things in life, practice makes perfect. Or, at the least, it reduces the chance of a noteworthy problem or setback. In the sport of triathlon, there are three disciplines: swimming, cycling, and running. However, those of us in the triathlon community joke that there is a fourth discipline: the transition between sports. Seasoned endurance triathletes know that solid, speedy transitions are crucial to a successful day. Funnily enough, we practice running to our 1-ft x 2-ft transition space, throwing our helmets on and off, and stashing snacks in our race kits for the long ride or run ahead. Just as backups are a small component in an organization's thorough security program, the transition is a sliver of the race, a truly minuscule part. Nevertheless, a mistake in those three and a half minutes has the potential to add massive hurdles for the duration of the race. It's a seemingly small thing, but it matters. So, too, do comprehensive and practiced data recovery procedures. While they make up one small piece of the larger security puzzle, well-rounded data recovery procedures are vital. It's a simple thing, and it could make all of the difference!

If you’ve ever driven on a major metropolitan highway system, you’ve seen it: The flow of traffic is completely engineered. Routes are optimized to allow travelers to reach their destinations as quickly as possible. Traffic laws specify who is allowed in which lanes and at what speeds—carpool lanes, slow lanes, truck lanes, and so on. There are special rules in place to control the passage of hazardous cargo and oversized vehicles. Toll booth signage directs traffic based on payment and vehicle type. And all of this has been pre-defined by a group of civil engineers long before the first cubic yard of concrete is poured.

Engineering like this optimizes for efficiency and prioritizes safety. The same can be done when designing computing systems and considering how data is transported across networks.

The key principle behind Critical Control 9 is management of ports, protocols, and services (PPS) on devices that are a part of your network. This means that all PPS in use within your infrastructure must be defined, tracked, and controlled, and that any corrections should be undertaken within a reasonable timeframe. The initial focus should be critical assets and evolve to encompass your infrastructure in its entirety. By maintaining knowledge of what is running and eliminating extraneous means of communication, organizations reduce their attack surface and give attackers fewer areas in which to ply their trade.

The security control encourages people to examine and eliminate unnecessary PPS on each system. Going back to our road analogy, there is no reason for there to be an aircraft runway on your highway, is there? The same is true of your web server. Do you need to have FTP enabled there? How about SMTP? Unless you are running your mail server or an open FTP service on your externally-facing web server, which I DO NOT recommend, the answer should be a resounding no. Eliminate those services whenever they're not necessary. Hardening guidelines for these systems can be a great starting point, and they can be deployed and monitored through configuration management tools.

In the case of off-the-shelf software that you will be configuring to run on your network, the testing phase is one opportunity to ensure that you are not over-exposing yourself to risk. Most server software will come with instructions that inform you which ports are required to run the system and allow you to configure things like communications between applications and databases. Add this information to your documentation of the system. This helps with developing firewall rules and with ensuring that your data protection program and disaster recovery/business continuity plans are kept up to date with the information required for success.

Prior to installation, perform a baseline port scan of the hardened system using your vulnerability scanner (or other freely available applications, such as port scanners and packet capturing tools). Once the system has been installed, perform another port scan and compare the results. Any ports or services the system requires beyond what the configuration instructions mentioned will be made known at this time.
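If a dedicated scanner isn't handy, the baseline comparison can be sketched with nothing but the standard library. This is a minimal TCP connect scan for illustration (the host and port list are whatever you choose), not a replacement for a real vulnerability scanner:

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.add(port)
    return open_ports

def diff_baseline(baseline, current):
    """Compare a post-install scan against the pre-install baseline."""
    return {
        "new": sorted(current - baseline),     # opened since the baseline scan
        "closed": sorted(baseline - current),  # no longer listening
    }
```

Anything in the "new" bucket that isn't documented in the vendor's configuration instructions deserves investigation before the system goes into production.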

Leverage host-based firewalls on your servers, with whitelists configured to ONLY allow communications between the required components of the system (e.g., database connections or administrative access from specific IP spaces). Workstations can also use this technology to the same end. All other non-essential communications should be blacklisted, as they open the system up to additional risk.

Perform port scans of your infrastructure to understand and control exposure. Developing a baseline should be one of the first things you do. This activity should be taking place not only on a system-by-system basis but also across the landscape as a whole. When a discrepancy is discovered between the known and approved baseline, your setup should allow appropriate stakeholders to receive alerts so they know to investigate the activity and validate its business purpose.

Hit your external IP space by performing port scans against the entire range of external IPs you have assigned. Discovering hosts that shouldn’t be internet-facing can save you a lot of heartache down the line! A number of organizations only scan specific external IP addresses as a part of their vulnerability management programs and there is always a chance that a host may have been placed in the external space by accident. Finding these hosts and moving them onto a VLAN in your internal private IP space is an important part of risk reduction.

Separate critical services on individual host machines. We mentioned SMTP and FTP earlier, but it goes deeper than that. While you may be leveraging your domain controller for DHCP, you certainly should not be including any additional critical services on these boxes. If at all possible, physical segregation is ideal, but in complex modern computing and operational environments, this may not be feasible. Regardless of the means of segmentation, enhance the security of the hosts by locking them down to only the required services. In the case of critical services such as DNS, DHCP, and database servers, for example, this is simply a means of ensuring that the attack landscape is kept at a minimum and that attackers cannot gain multiple lines of advancement toward the crown jewels.

Use application firewalls and place them in front of any critical servers. This helps ensure that only the appropriate traffic is permitted to access the application.

Additional information on all of these options can be found in the other controls within the series, specifically, Critical Control 11: Secure Configuration for Network Devices, and Critical Control 12: Boundary Defense.

Once again, the Limitation and Control of Network Ports, Protocols and Services comes down to knowing your environment, having a clear understanding of what is necessary, and maintaining that through documentation, observation, testing, and validation.

The biggest threat surface in any organization is its workstations. This is the reason so many of the CIS Critical Security Controls relate to workstation and user-focused endpoint security. It is also the reason that workstation security is a multibillion-dollar industry. For the next two posts, I'll be covering the specifics of Controls 7 and 8, which focus on the biggest weak points in Information Security: web browsers, email, and malware. This set of posts is intended to help you understand how to properly control the threat surface without limiting usability.

Email and web access are critical for most day-to-day operations in any organization, but they're also a significant source of attacks and incidents. Properly securing email servers, web browsers, and mail clients can go an extremely long way in limiting incidents that routinely make news headlines. Good configuration and control of email and web browsers is also going to significantly reduce the number of low-level incidents your organization will encounter on a monthly basis.

What the CIS Critical Control 7 covers

Critical Control 7 has eight sections that cover the basics of browser and email client safety, secure configuration and mail handling at the server level. The control pays specific attention to concepts like scripting and active component limiting in browsers and email clients, attachment handling, configuration, URL logging, filtering and whitelisting. The premise of the control is fairly straightforward: browser and email client security are critically important for low-level risk mitigation. If your browsers and email aren't secure, your users and your network aren't either.

It's worth noting that this control, as well as Controls 1, 2, and 8, are often directly connected, and they tie into quite a few of the other 20 pretty easily. As I mentioned in the posts for Controls 1 and 2, properly implemented browser and email security will improve any organization's security posture with regard to the other controls.

How To Implement it

Since this control touches on a number of typically separate IT functions, it's important to have the people who run the various systems involved on board when working with it. Personally, I love dealing with controls like this, as they have the potential to unify an IT or IT/IS department in terms of strategy and process, which always helps improve security awareness and capacity.

Start with filtering

Successful implementations of Control 7 usually work from two sides: the server/network side and the endpoint configuration/application side. Networking and email server teams should start by limiting how attachments are handled and forwarded from the mail server to clients, and implement content filtering first. In many cases, this is already set up on mail servers for purposes of space management or security, but it's worthwhile to go a step further and ensure that potentially malicious content is being filtered before it reaches any user's inbox.

Implement SPF, or something similar

At the same time, implementing Sender Policy Framework at the DNS level and on the mail servers should cut down on the amount of spam and malicious traffic coming into the system. It should be noted that while SPF is not an anti-spam measure, it's effective as a control for malicious mail traffic. It's important that the SPF records and implementation include receiver-side verification (this is directly mentioned in sub-control 7.7). Typically, this section of Control 7 is overlooked, as it's a high-effort measure, but it's worthwhile for a number of reasons, including SMTP traffic reduction, better junk mail sorting, compatibility with other services, and a general reduction in those "I didn't send a message to this person, but I'm being notified that I did" conversations with your colleagues.

It's also worth noting that there are a few pitfalls in implementing it. OpenSPF.org has a good overview of SPF best practices, which should serve as a good starting point. While the CIS recommends SPF, there are alternative systems and strategies. I'd suggest looking into DKIM and DMARC, since they work well in combination with SPF, although they're not directly mentioned in CIS Critical Control 7. If you're using a third-party provider for email, it's worth assessing this with their personnel; they may have extra expertise on hand, or may have done it already.
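For reference, an SPF policy lives in a DNS TXT record at the domain's apex. A minimal example, using placeholder names and the documentation netblock (the domain, addresses, and include target are hypothetical):

```
example.com.  3600  IN  TXT  "v=spf1 mx ip4:192.0.2.0/24 include:_spf.mailprovider.example -all"
```

Here `mx` authorizes the domain's MX hosts to send, `ip4:` authorizes a specific netblock, `include:` pulls in a provider's published policy, and `-all` tells receivers to hard-fail everything else. Many organizations start with the softer `~all` qualifier while validating their legitimate senders, then tighten to `-all`.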

Configure all the things!

By far, the hardest part of this control is managing the browsers and clients on your network. It's inevitable that there will be roadblocks, but the good news is that there are a number of good ways to handle browser configuration that should both enable your users and limit the risks from malicious code in websites (and any attachments that do get through your already iron-clad email server). Typically, Rapid7 recommends that workstations have a “2 browser” system: one browser should be well secured, seriously limiting access for third-party scripting, ads, and any software that hasn't been reviewed by the security team, and it should be used for general Internet access and anything that is considered remotely risky.

The other configuration will usually be less secure, but limited in use to internal or organization-specific services. Usually, this means script-based applications or software that requires old, out-of-date, or insecure code and components. For example, if your corporate intranet relies on Flash and ActiveX scripts to manage employee benefits, it's probably a good idea to have your users access the intranet with a browser specially configured for that task. This can be as simple as putting a link or batch file on workstation desktops with specific startup info for the browser, or leaving the homepage set to the specific resource in question. I've seen more complex configurations that rely on secure jumphosts, Citrix, or remote desktop and network limitations, but these are often cumbersome for most users and not recommended.

One last bit of advice

The simple axiom to follow when implementing this control is: make it simple for the users, or they will find a way around it. It's important to consider this when applying Controls 7 and 8, because increasing complexity, or the effort your users have to put in, often leads to privilege misuse or other workarounds that defeat the controls. In this context, it's worth remembering that human error is still the major source of most breaches and incidents. This includes phishing and clickjacking campaigns, which often fool the best of us, despite well-configured systems.

A note on URL requests, privacy and security

Subcontrol 7.4 specifically identifies URL request logging as a necessity for the identification of potentially compromised systems. This subcontrol actually overlaps with Critical Control 6, which Cindy Jones discussed in an earlier post. While it's important to have this data for incident response and awareness purposes, it's worth considering how long it's kept, and how it's managed; it's extremely important that the request logs are considered private or limited-access data, and aren't being shared in a way that could put users of your network at risk. This also applies to any TLS or encrypted traffic monitoring that you may be undertaking.

In your organizational environment, Audit Logs are your best friend. Seriously. This is the sixth blog of the series based on the CIS Critical Security Controls. I'll be taking you through Control 6: Maintenance, Monitoring and Analysis of Audit Logs, to help you understand the need to nurture this friendship and how it can bring your information security program to a higher level of maturity while helping you gain visibility into the deep, dark workings of your environment.

In the case of a security event or incident, real or perceived (whether it takes place due to one of the NIST-defined incident threat vectors or falls into the “Other” category), having the data available to investigate and effectively respond to anomalous activity in your environment is not only beneficial but necessary.

What this Control Covers:

This control has six sections, covering everything from NTP configuration to verbose logging of traffic from network devices, how the organization can best leverage a SIEM for a consolidated view and action points, and how often reports need to be reviewed for anomalies.

There are many areas where this control runs alongside or directly connects to some of the other controls as discussed in other CIS Critical Control Blog posts.

How to Implement It:

Initial implementation of the different aspects of this control range in complexity from a “quick win” to full configuration of log collection, maintenance, alerting and monitoring.

Network Time Protocol: Here's your quick win. By ensuring that all hosts on your network are using the same time source, event correlation can be accomplished in a much more streamlined fashion. We recommend leveraging the various NTP pools that are available, such as those offered by pool.ntp.org. Having your systems check in to a single regionally available server on your network, which has obtained its time from the NTP pool, will save you hours of chasing down information.
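As a sketch of that topology (the internal hostname is a hypothetical placeholder), a classic ntpd configuration looks something like this:

```
# /etc/ntp.conf on the internal time server: sync from the public pool
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst

# /etc/ntp.conf on every other host: sync only from the internal server
server time.corp.example.com iburst
```

The `iburst` option simply speeds up initial synchronization; the important part is that every host derives its clock from the same internal source, so timestamps across your logs line up.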

Reviewing and Alerting: As you can imagine, there is a potential for a huge amount of data to be sent over to your SIEM for analysis and alerting. Knowing what information to capture and retain is a huge part of the initial and ongoing configuration of the SIEM.

Fine-tuning of alerts is a challenge for a lot of organizations. What is a critical alert? Who should be receiving these, and how should they be alerted? What qualifies as a potential security event? SIEM manufacturers and Managed Service Providers have their pre-defined criteria and, for the most part, are able to effectively define clear use cases for what should be alerted upon; however, your organization may have additional needs. Whether these needs are the result of compliance requirements or of needing to keep an eye on a specific critical system for anomalous activity, defining your use cases and ensuring that alerts are sent at the appropriate level of concern, and to the appropriate resources, is key to avoiding alert fatigue.

Events that may not require immediate notification still have to be reviewed. Most regulatory requirements state that logs should be reviewed "regularly" but remain vague on what this means. A good rule of thumb is to have logs reviewed on a weekly basis, at a minimum. While your SIEM may have the analytical capabilities to draw correlations, there will undoubtedly be items that you find that will require action.

What should I be collecting?

There is a lot of technology out there to “help” secure your environment: everything from Active Directory auditing tools, which allow you to pull nicely formatted and predefined reports, to network configuration management tools. There are all sorts of flavors out there doing things that your SIEM tool can do with appropriately managed alerting and reporting. The SIEM should be able to be a one-stop shop for your log data.

In a perfect world, where storage isn't an issue, each of the following items would have security-related logs sent to the SIEM.

Network gear:
- Switches
- Routers
- Firewalls
- Wireless controllers and their APs

3rd-party security support platforms:
- Web proxy and filtration
- Anti-malware solutions
- Endpoint security platforms (HBSS, EMET)
- Identity management solutions
- IDS/IPS

Servers (special emphasis on any system that maintains an identity store, including all Domain Controllers in a Windows environment):
- Application servers
- Database servers
- Web servers
- File servers – yes, even in the age of cloud storage, file servers are still a thing, and access (allowed or denied) needs to be logged and managed

Workstations:
- All security log files

This list is by no means exhaustive, and even at the level noted we are talking about large volumes of information. This information needs a home. This home needs to be equipped with adequate storage and alerting capabilities.

Local storage is an alternative, but it will not provide the correlation, alerting, or retention capabilities of a full-blown SIEM implementation.

There has been some great work done in helping organizations refine what information to include in log collections. Here are a few resources I have used.

What are the CIS Critical Security Controls?

The Center for Internet Security (CIS) Top 20 Critical Security Controls (previously known as the SANS Top 20 Critical Security Controls) is an industry-leading way to answer your key security question: “How can I be prepared to stop known attacks?” The controls transform best-in-class threat data into prioritized and actionable ways to protect your organization from today's most common attack patterns.

What's new in Version 7 of the CIS Critical Security Controls?

With the release of Version 7 of the controls, CIS has attempted to provide consistency and simplify the wording of each control. It has also simplified the controls by implementing "one ask" per sub-control, making them more precise. There is more focus on authentication and application whitelisting, and they now better align with other security frameworks, such as NIST.

Version 7 keeps the same 20 controls but now separates them into three distinct categories: basic, foundational, and organizational. The basic controls (1-6) should be implemented in every organization for essential defense readiness. The foundational controls (7-16) are the next step up from the basic controls, while the organizational controls (17-20) focus more on people and processes.

Achievable Implementation of the CIS Critical Security Controls

The interesting thing about the critical security controls is how well they scale to work for organizations of any size, from very small to very large. They are written in easy-to-understand business language, so non-security people can easily grasp what they do. They cover many parts of an organization, including people, processes, and technology. As a subset of the priority 1 items in the NIST 800-53 special publication, they are also highly relevant and complementary to many established frameworks.

The Security Advisory Services team, part of Rapid7's managed services unit, specializes in security assessments for organizations. Using the CIS Critical Security Controls (formerly the SANS 20 Critical Controls) as a baseline, the team assesses and evaluates strengths and gaps, and makes recommendations on closing those gaps.

The Security Advisory Services team will be posting a blog series on each of the controls. These posts are based on our experience over the last two years of our assessment activity with the controls, and how we feel each control can be approached, implemented and evaluated. If you are interested in learning more about the CIS Critical Controls, stay tuned here as we roll out posts weekly. Thanks for your interest and we look forward to sharing our knowledge with you!

The definitive guide to the CIS Critical Security Controls

As the blog series expands, we'll use this space to keep a running list of all 20 CIS Critical Controls. Check back here to stay updated on each control.

Control 1: Inventory and Control of Hardware Assets

This control is split into eight focused sections relating to network access control, automation, and asset management. The control specifically addresses the need for awareness of what's connected to your network, as well as the need for proper internal inventory management and management automation. Implementing inventory control is probably the least glamorous way to improve a security program, but if it's done right, it reduces insider threat and loss risks, cleans up the IT environment, and improves the other 19 controls. Learn more.

Control 2: Inventory and Control of Software Assets

The second control is split into 10 sections, each dealing with a different aspect of software management. Much like Control 1, this control addresses the need for awareness of what's running on your systems and network, as well as the need for proper internal inventory management. The CIS placed these controls as the "top 2" in much the same way that the NIST Cybersecurity Framework addresses them as "priority 1" controls on the 800-53 framework; inventory and endpoint-level network awareness is critical to decent incident response, protection, and defense. Learn more.

Control 3: Continuous Vulnerability Management

Organizations operate in a constant stream of new security information: software updates, patches, security advisories, threat bulletins, etc. Understanding and managing vulnerabilities has become a continuous activity and requires a significant amount of time, attention, and resources. Attackers have access to the same information, but have significantly more time on their hands, which can let them take advantage of gaps between the appearance of new knowledge and remediation activities. Control 3 challenges you to understand why vulnerability management and remediation is important to your overall security maturity. Learn more.

Control 4: Controlled Use of Administrative Privileges

The ultimate goal of an information security program is to reduce risk. Often, hidden risks run amok in organizations that just aren't thinking about risk in the right way. Control 4 of the CIS Critical Security Controls can be contentious, can cause bad feelings, and is sometimes hated by system administrators and users alike. It is, however, one of the controls that can have the largest impact on risk. Discover how reducing or controlling administrative privilege and access can reduce the risk of an attacker compromising your sensitive information. Learn more.

Control 5: Secure Configuration for Hardware and Software on Mobile Devices, Laptops, Workstations, and Servers

This control deals with secure configurations for hardware and software. The Critical Controls are numbered in a specific way, following a logical path of building foundations while you gradually improve your security posture and reduce your exposure. Controls 1 and 2 are foundational to understanding what inventory you have. The next step, Control 5, is all about shrinking that attack surface by securing the inventory in your network. Learn more.

Control 6: Maintenance, Monitoring, and Analysis of Audit Logs

This control has eight sections which cover everything from NTP configuration, to verbose logging of traffic from network devices, to how the organization can best leverage a SIEM for a consolidated view and action points, and how often reports need to be reviewed for anomalies. Learn more.

Control 7: Email and Web Browser Protections

Critical Control 7 has 10 sections that cover the basics of browser and email client safety, secure configuration and mail handling at the server level. The control pays specific attention to concepts like scripting and active component limiting in browsers and email clients, attachment handling, configuration, URL logging, filtering and whitelisting. The premise of the control is fairly straightforward: browser and email client security are critically important for low-level risk mitigation. Learn more.

Control 8: Malware Defenses

Control 8 covers malware and antivirus protection at system, network, and organizational levels. It isn't limited to workstations, since even servers that don't run Windows are regularly targeted (and affected) by malware. Control 8 should be used to assess infrastructure, IoT, mobile devices, and anything else that can become a target for malicious software—not just endpoints. Learn more.

Control 9: Limitation and Control of Ports, Protocols, and Services

Control 9 covers management of ports, protocols, and services (PPS) on devices that are a part of your network. This means that all PPS in use within your infrastructure must be defined, tracked, and controlled, and that any corrections should be undertaken within a reasonable timeframe. The initial focus should be critical assets and evolve to encompass your infrastructure in its entirety. By maintaining knowledge of what is running and eliminating extraneous means of communication, organizations reduce their attack surface and give attackers fewer areas in which to ply their trade. Learn more.

Control 10: Data Recovery Capabilities

Control 10 discusses processes and tools used to properly back up critical information with a proven methodology for timely recovery of it. The control standard consists of four criteria which are labelled as foundational elements to a security program; these focus on system backups and testing. Learn more.

Control 11: Secure Configurations for Network Devices, such as Firewalls, Routers, and Switches

Control 11 covers secure configurations for network devices, including firewalls, routers, switches, and network IDS setups; many of these concepts can be applied to DHCP/DNS appliances, NAC enforcement appliances, and other solutions, too. The goal is to harden these critical network infrastructure devices against compromise, and to establish and maintain visibility into changes that occur on them—whether those changes are made by legitimate administrators or by an adversary. Learn more.

Control 12: Boundary Defense

Control 12 covers boundary defense, or an organization's first line of protection against outside threats. There are ten subsections to this control that cover your DMZ, firewalls and proxies, IDS/IPS, NetFlow, and remote access. Today, many attackers focus on exploiting systems that they can reach across the internet; they are constantly probing perimeters for vulnerabilities and information needed to build their attack plan. Learn more.

Control 13: Data Protection

Data protection is one of the cornerstones of a solid security program, and it is a critical function of the CIA Triad of Confidentiality, Integrity, and Availability. Data protection, as characterized by Critical Control 13, is essentially secure data management. Learn more.

Control 14: Controlled Access Based on the Need to Know

Control 14 covers the processes and tools used to track, control, prevent, and correct secure access to critical assets such as information, resources, and systems. It’s important to establish a formal classification of your data types in order to define which persons, computers, and applications have a need and right to access them. Learn more.
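Once data types are formally classified, need-to-know checks reduce to a lookup: for each classification, which roles are permitted access? The classification labels and roles below are hypothetical, chosen only to show the shape of such a matrix.

```python
# Hypothetical sketch (Control 14): a need-to-know access matrix mapping
# each data classification to the roles allowed to read it.
ACCESS_MATRIX = {
    "public": {"employee", "contractor", "finance"},
    "internal": {"employee", "finance"},
    "restricted": {"finance"},
}

def can_access(role: str, classification: str) -> bool:
    """Grant access only if the role appears in the classification's allow set."""
    return role in ACCESS_MATRIX.get(classification, set())

print(can_access("contractor", "restricted"))  # False
print(can_access("finance", "restricted"))     # True
```

Note the default: an unknown classification maps to an empty set, so access is denied unless it was explicitly granted — deny-by-default is the safer failure mode for this control.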

Control 15: Wireless Access Control

Control 15 covers the processes and tools used to track, control, prevent, and correct the secure use of wireless local area networks (LANs), access points, and wireless client systems. With so many emails, documents, logins, and the like being transmitted around us, we must turn our attention to securing this sensitive data. Learn more.

Control 16: Account Monitoring and Control

Control 16 recommends processes to manage the lifecycle (creation, use, dormancy, and deletion) of system and application accounts. To address this control, companies can implement best practices for account lifecycle management, configuration settings, and two-factor authentication. Learn more.
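The dormancy stage of the account lifecycle is a common blind spot: accounts that are never used but never disabled. As a minimal sketch, the check below flags accounts whose last login exceeds a dormancy limit; the usernames, dates, and 90-day threshold are all hypothetical.

```python
from datetime import date, timedelta

# Hypothetical account records (Control 16): username -> last login date.
accounts = {
    "alice": date(2024, 6, 1),
    "bob": date(2023, 1, 15),
    "svc-backup": date(2024, 5, 20),
}

DORMANCY_LIMIT = timedelta(days=90)  # assumed policy threshold
today = date(2024, 6, 30)

# Any account idle longer than the limit is queued for review/disablement.
dormant = [u for u, last in accounts.items() if today - last > DORMANCY_LIMIT]
print(dormant)
```

In practice the last-login data would come from a directory service or SIEM, and the flagged accounts would feed a review workflow rather than automatic deletion, since service accounts often have legitimate long-idle patterns.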

Control 17: Implement a Security Awareness and Training Program

You can put all the work you want into developing a security program, but the project will remain incomplete if you don’t train your employees on it. Control 17 covers steps to take to ensure your team understands security best practices, current defense strategies, and what is expected of them. Learn more.

Control 18: Application Software Security

Control 18 covers the process of implementing application software security and its various sub-controls. This entails fostering a relationship with app development and procurement groups, implementing security gates to address the controls, and ensuring you have the proper people and tools in place. Learn more.

Control 19: Incident Response and Management

The key principle of Control 19 is protecting the organization's information and reputation by developing and implementing an incident response infrastructure for quickly discovering an attack and effectively containing the damage, eradicating the attacker's presence, and restoring the integrity of the network and systems. Learn more.

Control 20: Penetration Tests and Red Team Exercises

Control 20 discusses the need for penetration tests and Red Team exercises to consistently evaluate the effectiveness of your organization's security program. Though both can be instrumentally helpful in ascertaining your security standing, they are not quite the same thing. Learn more.