Solving IoT Security - Pursuing Distributed Security Enforcement

For many of us in the Security Industry, the possibility of using Internet of Things (IoT) devices as a launchpad for an attack has been mostly theoretical. However, information obtained after the recent massive distributed denial-of-service (DDoS) attack against the services offered by Dyn appears to show that the threat is real and immediate.

The definition of IoT is often a little vague. Generally speaking, I consider any device with an IP address associated with it to be some sort of IoT device, though not all of them are problems. The ones that are the largest source of concern are those with the following characteristics:

• Have a statically configured administrative account

• Users can set and forget them

• Reach out to an Internet-located service for administrative control (i.e., cloud-based management)

• Have no automated patch or update management

Essentially, what I am really talking about when using the term IoT are devices with IP addresses where users don’t directly interact with the operating system on a continual basis. I would argue that even devices like an iPhone or an Android-based phone are essentially IoT, even though they don’t meet some of the criteria.

In previous articles we’ve discussed a number of reasons why these devices are security challenges, such as their having no automated update processes. While we could try to find tools, techniques, and processes to address these issues, given the vast array of devices that qualify as IoT devices, that quickly becomes a daunting, if not impossible, task. In reality, that will likely result in stasis setting in and nothing getting done, leaving the current gaping security hole in place.

But what if we could actually embed security inspection right into the network itself? For any of these IoT devices to be effective as a malicious device, they need to be able to communicate with other devices. The most common and simplest method of communication is via 802.11 WiFi. While that covers direct communications from the device itself, a lot of them also have some kind of configuration interface as well, more often than not located somewhere on the Internet, requiring the devices to attempt to connect to external resources.
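Because most IoT devices legitimately need to reach only their vendor's management service, one practical enforcement model is default-deny egress per segment. The sketch below illustrates that decision logic; the segment names are hypothetical, and the allowed networks use reserved documentation ranges rather than real vendor endpoints.

```python
import ipaddress

# Hypothetical per-segment egress policy: devices in a segment may reach
# only their vendor's management service; everything else is denied.
# Segment names and networks are illustrative, not real endpoints.
IOT_EGRESS_ALLOWLIST = {
    "camera-segment": [ipaddress.ip_network("203.0.113.0/24")],
    "thermostat-segment": [ipaddress.ip_network("198.51.100.0/24")],
}

def egress_allowed(segment: str, dst_ip: str) -> bool:
    """Default-deny egress check: allow only destinations explicitly
    listed for the device's segment; unknown segments get nothing."""
    allowed = IOT_EGRESS_ALLOWLIST.get(segment, [])
    addr = ipaddress.ip_address(dst_ip)
    return any(addr in net for net in allowed)
```

A device that is recruited into a botnet and starts reaching out to arbitrary hosts fails this check immediately, which is exactly the behavior change the Dyn-style attacks depend on.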

If we step back just a little bit, the concept of internal segmentation essentially means deploying network-based security enforcement technology throughout the extended network, even into the cloud, and not just at traditional edges and chokepoints between places in the network. For example, you could deploy enforcement at the point where a device connects to the network, whether that is the L2/3 switching layer for traditional wired systems (as in the data center) or the wireless access layer now predominantly used by end users. From there, traffic can be directed into assigned network segments that can be monitored, and data moving from one segment to another can be controlled.
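At the access layer, "directing traffic into assigned segments" often comes down to classifying a connecting device and putting it in the right VLAN. A minimal sketch of that classification step, assuming identification by MAC OUI (the vendor prefix) with made-up OUIs and VLAN IDs:

```python
# Sketch of segment assignment at the connection point: classify a device
# by its MAC OUI (first three octets) and return the VLAN for its segment.
# The OUIs and VLAN numbers below are invented for illustration.
OUI_TO_VLAN = {
    "f0:1f:af": 20,   # hypothetical camera vendor -> camera segment
    "00:1a:2b": 30,   # hypothetical thermostat vendor -> controls segment
}
RESTRICTED_VLAN = 99  # unrecognized devices land in a restricted segment

def assign_vlan(mac: str) -> int:
    """Map a connecting device's MAC address to its segment's VLAN,
    defaulting unknown hardware into a restricted segment."""
    oui = mac.lower()[:8]
    return OUI_TO_VLAN.get(oui, RESTRICTED_VLAN)
```

A production deployment would use stronger identity signals (802.1X, device profiling) than OUI alone, but the default-to-restricted pattern is the important part.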

This sort of internal segmentation strategy is especially relevant in the virtualization technology space, where I would submit that it’s actually a larger problem than for physical networks. Deploying host-based security technologies for every VM can result in a significant drop in the number of instances that can be hosted on a single system due to all the security processing overhead required for each instance. While there are some solutions that attempt to limit this overhead, these typically only support a very small range of operating environments. The big risk in a virtualized infrastructure is the rate at which malware and other attacks can spread, and the number of systems available to be compromised.

This challenge in the virtualization space provides a ready environment to begin putting internal segmentation practices to use. This is true in part because virtualized networks are often new deployments, but also because the physical effort required to deploy internal segmentation doesn’t exist: there are no firewalls or other enforcement products to physically install, no transceivers to seat, no cables to run.

For some deployments, secure segmentation is an area of critical need since another benefit of virtualization is the ability to keep legacy systems and operating platforms running longer. The challenge is that many of these systems and platforms don’t have current host-based security tools available to them.

When you are designing your virtualization infrastructure, there are multiple ways to build internal segmentation right into the process. In fact, modern solutions allow you to largely automate the deployment of a security policy whenever you deploy a new VM.
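What "policy follows the VM" automation looks like can be sketched in a few lines: derive the firewall rules from the VM's tags at deployment time instead of hand-configuring them afterward. The tag names and rule fields below are assumptions for illustration, not any specific vendor's API.

```python
# Hedged sketch of tag-driven policy generation at VM deploy time.
# Tags, ports, and rule structure are illustrative assumptions.
def policy_for_vm(name: str, tags: set) -> dict:
    """Build a default-deny rule set for a new VM based on its tags."""
    rules = [{"action": "deny", "direction": "any", "match": "any"}]
    if "web" in tags:
        # web-tagged VMs accept inbound HTTPS from anywhere
        rules.insert(0, {"action": "allow", "direction": "inbound",
                         "port": 443})
    if "db" in tags:
        # only the web tier may reach db-tagged VMs
        rules.insert(0, {"action": "allow", "direction": "inbound",
                         "port": 5432, "source_tag": "web"})
    return {"vm": name, "rules": rules}
```

The value of this pattern is consistency: every VM is born with a default-deny policy, so a forgotten rule fails closed rather than open.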

Once you have developed a process for securing your virtualized deployments, you can take all those hard-learned lessons and apply them to the more demanding challenges of physical deployments. When doing so, however, don’t ignore the obvious. Small offices, branch plants, retail locations, and deployments with a similar footprint are all easy candidates for the deployment of internal segmentation. Rather than deploying a router/switch/firewall combination at each location, deploy a high port density security solution and connect all the devices in that location directly. These types of deployments are now possible even when a high degree of resiliency is required, something that used to be a challenge for some of these locations.

And at the same time, don’t forget your new cloud deployments. Most cloud computing or infrastructure environments have security tools available as a service. By properly selecting and configuring tools to be compatible with your private cloud and physical security deployments, you can create seamless segmentation and consistent security policy enforcement even into your public cloud environments.

The biggest challenge with deploying internal segmentation in the modern network, and in the data center in particular, is the growing capacity demand and the potential amount of new hardware that will need to be installed to meet it. Thinking about this as a distributed processing challenge may help with your design, allowing you to apply only those security technologies you need, and only where they are most appropriate. Because the data center tends to be reasonably static, with well-defined application stacks that require very high performance, L3/L4 inspection may be sufficient there to inspect and protect internal traffic, with L7 security added as a service to a transaction chain only when required. There is also an increasing trend to treat all IP-enabled systems as if they were external, even those physically located within our own perimeter; it’s no longer sufficient to consider physical location alone when determining security risk.
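The idea of keeping fast L3/L4 filtering in the default path and splicing in heavier L7 inspection only when needed can be sketched as a simple service-chain selection. The flow attributes and tier names here are assumptions made for the sketch:

```python
# Illustrative service-chain selection for data-center traffic: always-on
# stateful L3/L4 filtering, with deeper inspection added per flow only
# when its attributes warrant it. Field and tier names are assumed.
def inspection_chain(flow: dict) -> list:
    """Return the ordered list of security services a flow traverses."""
    chain = ["l3l4-firewall"]            # always-on stateful filtering
    if flow.get("crosses_trust_boundary"):
        chain.append("ips")              # deeper inspection between segments
    if flow.get("protocol") in {"http", "https"}:
        chain.append("l7-proxy")         # application-layer inspection on demand
    return chain
```

This is the distributed-processing framing in miniature: most east-west traffic pays only the cheap L3/L4 cost, so the expensive L7 capacity can be sized for the minority of flows that actually need it.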

It’s unrealistic to expect any distributed deployment to be homogeneous, particularly when it comes to security solutions, so consideration of how your different vendors’ products work or integrate with other solutions (without introducing cumbersome redirection technologies like ICAP, WCCP, etc.) is also critical to deploying a functional internal segmentation solution. Wouldn’t it be nice if your L2 switching layer could automatically segment a compromised device from all other devices to prevent local propagation of malware? For this to happen, your security devices need to be able to inform your transport devices to make a forwarding-path change.
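The security-informs-transport handoff can be sketched as the translation step between a detection event and the port change a switch controller would apply. The event fields, confidence threshold, and command shape below are illustrative assumptions; a real integration would use your switch vendor's management API.

```python
# Sketch of translating a compromise verdict from a security sensor into
# a quarantine command for the access switch. All field names and the
# threshold are assumptions for illustration.
QUARANTINE_VLAN = 999  # isolated segment with no east-west reachability

def quarantine_command(event: dict):
    """Turn a high-confidence compromise event into a port-reassignment
    command; ignore low-confidence events to avoid self-inflicted outages."""
    if event.get("verdict") != "compromised" or event.get("confidence", 0) < 0.9:
        return None
    return {
        "switch": event["switch"],
        "port": event["port"],
        "vlan": QUARANTINE_VLAN,
    }
```

The confidence gate matters as much as the quarantine itself: an automated response that reacts to every alert will sooner or later segment something the business depends on.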

Ken McAlpine is a 35+ year veteran of the IT industry and remembers when security was almost exclusively a physical challenge and "the Internet" was a 110-baud acoustic coupler. McAlpine is currently the VP, Network Security Solutions and a member of the CTO Office for Fortinet. Prior to Fortinet, he was self-employed and engaged in consulting with international customers, primarily focusing on Security Technologies as well as Network and Systems Management technologies.