The Future of Security Is Already Here

Information security has always been a game of doing the most with the least. Security teams have to watch a broad attack surface with minimal tools and staff. Security has never been fully funded, so security leaders have focused the bulk of their monitoring budget on the biggest perceived threat, and on the areas where they could deploy sensors without breaking the bank.

The biggest perceived threat most security leaders have historically feared was the highly skilled hacker. Hacking has mutated from highly skilled people having fun at others' expense to highly skilled, highly paid professionals looking to steal valuable assets. Early hackers would deface a website or place a flag inside a protected system to show the world they had been there, earning reputation points by bragging in chat rooms and on dark Internet boards. Over the last 10 years, hacking has become a major focus for criminal groups, big businesses, and governments alike. Hackers steal intellectual property, products, customer lists, credit card information, banking information, and personal information for profit.

This is not to say that security leaders have ignored employees and others inside the building, but insiders were mostly left to HR policy, process and procedure, and physical security. The security team could only afford equipment facing one way, so the industry chose the outside as the biggest, scariest monster to watch.

In its infancy, the security profession did not have many tools. Access control was a matter of running a wire from the mainframe to a terminal. Remote connections were uncommon and extremely slow. Early security people managed access control lists and dataset authorizations, enforced user ID and password rules, and could physically see who was logged onto the computer by looking at who was sitting at the connected terminals. Information security was about making sure the right people could log on and that datasets sat in the right storage groups so they were protected properly. Direct access was more the domain of physical security, because it required access to the building and the specific floor.

Networks evolved and people moved to other buildings, other cities, and eventually other parts of the world. Access control became an important part of Information Security’s job. Initially we installed firewalls to protect the outside edge of the network and determine who could come in. As intrusion detection became available, we installed those appliances looking at the traffic getting past the firewall to make sure a bad guy hadn’t sneaked past. To control access more tightly, we deployed two-factor authentication and sent tokens to all key personnel.

Why were we monitoring the edges of the network? Two simple answers: the perceived hacker threat came from the outside, and the firewalls and IDS/IPS appliances of the day could only monitor a small amount of bandwidth. As recently as a few years ago, a firewall or IDS sensor that could watch 1Gb of traffic without slowing things down was considered huge and too expensive. So we guarded the castle walls, peering through arrow slits, trying to keep the bad guys out.

This strategy is no longer working.

The recent Target breach taught the general public what security people have known for some time: the bad guys are attacking from the inside. Companies trust people inside their buildings; whether they are employees or business partners, if they are inside, they are trusted. Security people know the bad guys are getting in via many different strategies, and have long wished for tools that could monitor the whole network at true internal LAN speeds. 10Gb and 40Gb sensors plugged into the core of the network were the dream. Recording what was going on in the network required logging and Security Information and Event Management (SIEM) systems that could ingest and process millions of log lines from thousands of systems every minute, plus the terabytes of storage needed to hold all those lines. The sensors did not exist, and the logging infrastructure was cost-prohibitive.
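The scale of that logging problem is easy to see with back-of-the-envelope arithmetic. The sketch below is illustrative only: the line rate, average line size, and retention window are assumptions, not figures from any particular deployment.

```python
# Back-of-the-envelope sizing for SIEM log storage.
# All three figures below are illustrative assumptions.
LINES_PER_MINUTE = 5_000_000  # "millions of log lines ... every minute"
AVG_LINE_BYTES = 300          # a typical syslog-style line, assumed
RETENTION_DAYS = 90           # a common compliance window, assumed

bytes_per_day = LINES_PER_MINUTE * 60 * 24 * AVG_LINE_BYTES
total_bytes = bytes_per_day * RETENTION_DAYS

print(f"Per day: {bytes_per_day / 1e12:.1f} TB")                 # 2.2 TB
print(f"For {RETENTION_DAYS} days: {total_bytes / 1e12:.0f} TB")  # 194 TB
```

Even at these modest assumed rates, a single quarter of retention lands in the hundreds of terabytes, which is why full-network logging was out of reach until storage prices fell.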

The good news is, those appliances are finally here, at price points that don't break the bank. A security team can finally turn its monitoring toward the real places of value. We watched the edge for so long because we could. Most companies store their valuable intellectual capital, primary customer database, or financial information on internal systems well inside the network, where they are not exposed to the Internet. With new IDS/IPS appliances and fourth-generation firewalls, the security team can add new castle walls around the primary customer database and tightly monitor traffic to and from the key financial systems. Storage prices have continued to fall, making terabytes of dedicated log storage a reality, so the sensors can report everything into the SIEM needed to find the attacks.

This story of the attacker on the inside has been part of security lore for decades. Security people have heard it at conferences and in presentations for a long time. Today, the ending is new.

The security product list that Sirius represents for our customers includes IDS/IPS (Intrusion Detection and Prevention) systems from the top three in the Gartner Magic Quadrant. TippingPoint (HP) and Sourcefire (Cisco) both offer IDS/IPS devices that can support 40Gb links; Proventia (IBM) will have theirs out by year end. IBM's Guardium products are designed specifically to watch access and traffic to and from a core database. Fourth-generation firewalls from Cisco, Palo Alto, HP, and Sonicwall can all handle the traffic load required to sit inside the network and create secure segments. SIEM systems like ArcSight (HP) and QRadar (IBM) can process billions of log lines per minute and support back-end storage arrays from NetApp, HP, IBM, EMC, and others. RSA offers a passive sensor that plugs into the core of a network and watches traffic patterns for anything "different," and does it at line speed.

Malware detection has moved from signature-based desktop virus scanning to a new generation of agents like IBM Trusteer APEX and network-based sensors like Sourcefire AMP (Advanced Malware Protection), which can detect malware traffic as it traverses the network. RSA ECAT sits both on the endpoint and in the network to find zero-day and advanced persistent threats before they steal the crown jewels. Code scanning and testing tools like Rational (IBM) and Fortify (HP) can monitor code while it is being written, catching errors like SQL injection, cross-site scripting, and buffer overflows before the code ever leaves development. The same tools can test applications in QA for more vulnerabilities before they move to production, and then monitor the applications in production.
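To make the SQL injection class of error concrete, here is a minimal sketch of the pattern code scanners flag, alongside the parameterized fix. The table, column names, and attacker input are invented for illustration; this is not output from any particular scanning tool.

```python
import sqlite3

# Toy database standing in for a real customer table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")

user_input = "1 OR 1=1"  # attacker-controlled value

# VULNERABLE: string concatenation lets the input rewrite the query,
# so the WHERE clause becomes "id = 1 OR 1=1" and matches every row.
# This is the pattern a static-analysis tool would flag.
rows_bad = conn.execute(
    "SELECT name FROM customers WHERE id = " + user_input
).fetchall()

# SAFE: a parameterized query treats the input strictly as data,
# never as SQL, so the hostile string simply fails to match any id.
rows_good = conn.execute(
    "SELECT name FROM customers WHERE id = ?", (user_input,)
).fetchall()

print(rows_bad)   # every row leaks
print(rows_good)  # no match
```

The fix is mechanical, which is exactly why scanners can catch it in development rather than after the code ships.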