John Pirc from IBM's Network Security Solutions has agreed
to be interviewed by the Securitylab and we certainly thank him for his
time and for sharing his knowledge with us.

John, what can you tell me about how networks have changed over
the years? What is the biggest difference, speed, intelligence, or
reliability?

Stephen, just think back to the original networks where we started
playing with the thick Ethernet cable. How much intelligence does the
big thick Ethernet cable have? Absolutely none. But, over time, we
have been adding devices, firewalls, and so on, each of which is a computer
running a software application. We have been increasing the
intelligence built into the network and that trend is going faster and
faster with a significant focus on reliability, consolidation, and
high-fidelity security content. As we move into the next IT paradigm shift
of a dynamic infrastructure, which includes things like virtualization
and cloud-based models, we can start to talk about key network
protection technologies such as Unified Threat Management.

Thank you for that John, how do you define the control zones in an intelligent network?

We have three major dimensions of the network where we can apply
controls: Access Control, Network Awareness & Threat Mitigation,
and finally Content Control. The original security device was the
firewall. Marcus Ranum invented the first firewall and put it into
place; since then we have used other devices to give ourselves access
control. Once access control was in place, we wanted insight
into the network traffic and Todd Heberlein invented the first IDS.
That gave us a sense of what traffic was going over our networks, and
IDS and Network Forensics have continued to evolve. Finally, we have
the need to manage content. After the World Wide Web was invented, it
was only a matter of time until people started going to dangerous places,
and we had to develop web filtering technologies that have evolved into
today's data loss prevention tools.

John, I have a strong sense that there is another major change
happening with our networks; they call it convergence, they call it
virtualization, but everything is becoming abstract. What are your
thoughts about that?

Stephen, the key point is that until recently you could touch appliances
or servers. You could put a sniffer on one side and a second sniffer on
the other end and see what a device was doing with network traffic.
Today it feels
fuzzy, especially when we have software appliances that provide various
applications inside of a virtual machine. Another example is Software
as a Service (SaaS), where instead of running our own commercial
application we might go ahead and just sign up for some kind of a
service. It is convenient, however our data is now stored on the SaaS
provider's server. That is a great segue into the ultimate buzzword
of 2009, “Cloud Computing,” where you are likely to be
running in a multi-tenant environment. Abstract, fuzzy, however you
want to define it, but it is a major change.

And as we get fuzzy, the threat landscape is changing, true?

Certainly, who would have guessed we would have to deal with
hacking as a service (HaaS)? Today you can buy or lease exploit code to
attack different operating systems or applications. There are attack
platforms available on a pay per visit or pay per infection basis. If
you can't break in, you get your money back.

The browser has absolutely replaced the operating system as the target
of the biggest numbers of attacks that we see today. 51% of all browser
attacks focus on plug-ins, with multimedia vulnerabilities following
close behind. Adobe, Microsoft Office, etc., are emerging as predominant
targets.

According to FBI statistics, cybercrime has surpassed drug trafficking. It is a one-trillion-dollar business.

OK, so what you are telling me is that defensive technology is
more fuzzy and the attacks are more specific. Is there a reason for
hope?

Today many security functions may be accomplished by a single physical
device, the Unified Threat Manager or UTM. Firewalls absolutely serve
their purpose, but they won’t stop browser-based attacks.
They wouldn’t stop the latest DNS vulnerability. However,
combining firewall, SSL VPN, IPS, and content awareness on one
device provides a high level of assurance against the majority of the
threats that we see today. The UTM market is growing at an incredible
rate, and the majority of network security vendors have embraced this
technology because it is smart: it can be deployed not just at the
gateway, but also at B2B network boundaries, across recent mergers and
acquisitions, and so on.

When I talk about commoditized security appliances, I am talking
primarily about firewalls and IPS. These two traditional standalone
point products can be added as functions in the networks that we are
building. We used to put these devices on the "front door" or the
perimeter, but your perimeter is pretty much everywhere nowadays. Also,
we can add these functions to networks from 1 gigabit up to 10 gigabits.

Instead of fuzzy, the more accurate way to view the coming security
paradigm is levels of abstraction. The concept of the intelligent
framework includes data and event proliferation management. The
framework is made up of people, process, technology, collection and
correlation.

IAM, an acronym for Identity and Access Management, is an up-and-coming
technology that has become vitally important to this framework. In the
future we will not just log events by machine name or IP address, we
will include the identity of the person behind that event. In
enterprise log management we have events that are propagated to
multiple collectors. The next step is correlation, which is the task of
the Security Information and Event Manager, the SIEM.

Ah yes, the SIEM. I know the auditors are pushing log
management for compliance reasons, do you think there is a tangible
return on investment for these technologies?

Stephen, manual analysis and human correlation do not scale, especially
when your network is melting down. Think of working at an enterprise
organization and fighting fires on the collection side without a SIEM.
You have to go to the IDS to see what event triggered. Then you might
have to go to the firewall and pull those logs. There might be routers
or switches with information you need. And how do you know which
internal system is involved? You have to consult the DHCP table, etc. These
activities take a lot of time and a lot of analysis. Meanwhile, the
correlation, which really makes up the intelligence of the framework
and complements the technology related to the collection of events, is
easily automated. The last thing you want is a security analyst running
down rabbit holes when these tasks can be automated, freeing the
professional to focus on what happened, what it means, and what needs
to be done.
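
The manual workflow John describes (check the IDS, pull the firewall logs, consult the DHCP table) is exactly what automated correlation replaces. A minimal sketch in Python, with hypothetical event data and field names rather than any vendor's actual SIEM schema:

```python
from datetime import datetime, timedelta

# Illustrative event feeds; the fields and values are invented for
# this sketch and do not reflect any specific product's log format.
ids_alerts = [
    {"time": datetime(2009, 3, 1, 10, 2), "src_ip": "10.0.0.42",
     "signature": "HTTP shellcode download"},
]
firewall_logs = [
    {"time": datetime(2009, 3, 1, 10, 2), "src_ip": "10.0.0.42",
     "dst_ip": "203.0.113.9", "action": "allow"},
]
dhcp_leases = {"10.0.0.42": "WORKSTATION-17"}

def correlate(alert, fw_logs, leases, window=timedelta(minutes=5)):
    """Join one IDS alert with firewall logs and DHCP leases so the
    analyst sees a single enriched incident instead of three raw feeds."""
    related = [log for log in fw_logs
               if log["src_ip"] == alert["src_ip"]
               and abs(log["time"] - alert["time"]) <= window]
    return {
        "signature": alert["signature"],
        # Resolve the internal machine behind the IP via the DHCP table.
        "host": leases.get(alert["src_ip"], "unknown"),
        "firewall_events": related,
    }

incident = correlate(ids_alerts[0], firewall_logs, dhcp_leases)
print(incident["host"])  # → WORKSTATION-17
```

The point is not the ten lines of code but that every lookup an analyst would do by hand becomes a mechanical join, leaving the human to interpret the enriched incident.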

In the future we will be working with information that is more powerful
than event data. To really get an operational picture we will use full
session information and continuous packet captures.

With a SIEM, it is really important to understand the scalability of
the solution; one of the most common reasons for deployment failure is
an inadequate database. Another key metric is events per second,
which is important for sizing your network and making sure that you
are buying the correct device from the right vendor. And again, most
vendors will work with you and have this type of sizing information.
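
As a rough illustration of the sizing metrics John mentions, here is a back-of-the-envelope calculation. Every number below is an assumed example for the sketch, not a vendor specification:

```python
# Illustrative SIEM sizing arithmetic; all inputs are assumptions.
sustained_eps = 2_000      # average events per second across all feeds
peak_multiplier = 10       # burst factor during an incident or a scan
avg_event_bytes = 500      # assumed size of one normalized event
retention_days = 90        # how long events stay in online storage

# Size for the burst rate, not the average, or collection drops events
# precisely when the network is melting down.
peak_eps = sustained_eps * peak_multiplier

daily_events = sustained_eps * 86_400  # seconds per day
online_storage_gb = daily_events * avg_event_bytes * retention_days / 1e9

print(f"peak EPS to size for: {peak_eps}")
print(f"online storage needed: {online_storage_gb:.0f} GB")
```

Even these toy numbers show why the database and storage questions decide deployment success: a modest 2,000 sustained EPS already implies terabytes of online retention.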

Another potential gotcha with a SIEM is its integration with
third-party event feeds. Does it have an API that can be
leveraged? How much online or network attached storage does the
SIEM have? Also, how much offline storage can it manage?

Some SIEMs support automated responses. As an example, can the SIEM
reach back to a router and add an ACL, or make a UTM or firewall rule
change? The Intelligent Network is made up of people, process,
technology, collection and correlation. So, obviously, you want to take
all of the intelligence that we have on the network and boil it down
into some human-readable format so you are not spending hours and hours
going through data that is really meaningless.

Thanks John. I certainly agree that correlation scaling is a
hard problem, alright. The second most advanced home-grown system I
have ever seen is at the San Diego Supercomputer Center. One of the
biggest attacks they discovered was simply because the number of events
they were tracking doubled, so they knew something must be going on. Of
course usually event detection is a lot more subtle. Let's talk a bit
about virtualization since that is the way most vendors create UTMs.

OK, Stephen we have come full circle. At one point, servers were so
expensive you ran as much on one as possible. Before timesharing became
a term for vacation rentals, it described systems like Multics that
hosted multiple users and processes.

After the Morris Worm in 1988, we learned it might not be good practice
to run a whole bunch of services on a server because if the server goes
down under an attack like the Morris Worm, everything goes down.

For the next 10-15 years good practice was to run one service on a
server. However, that is expensive, and it is not green to have the
server waiting for hours to perform a task that takes seconds, and so
we are moving towards virtualization. Virtualization is not new; it
goes back to IBM mainframes in the 1960s.

Virtualization provides a lot of benefits, server consolidation,
reduction of carbon footprint, etc. And, since it is software, it can
be attacked. Our X-Force research team has seen a significant uptick in
virtualization vulnerabilities. As any new technology such as
virtualization is adopted, you will start to see vulnerabilities
increase over time. This increase is due to the popularity and
unexplored risk of x86 virtualization.

The traditional threats have the same applicability in the virtual
environment so SQL Injection and Cross Site Scripting are just as
likely to work on a virtual machine. Remember, the applications have no
idea they are being virtualized in the first place. As I said, if the
attack will work against a real machine, it will work on a virtual
machine, but the problem is even greater than that. The virtual
machine is a file that could be easily taken off the system if not
protected correctly.

The ultimate direction we all need to focus on is securing
virtualization by integrating security that was purpose built for
virtual environments.

Thanks John, I know there are a number of UTM vendors, do you
have a suggested question that we ought to be asking when considering a
purchase?

One important question is how a UTM helps protect against
browser-based attacks. There is a great paper called All Your iFRAMEs
Point to Us, written by researchers at Google. The paper gives a really
great explanation, from start to finish, of how a browser attack using
iFrames works. A UTM has to understand HTTP Content-Coding or compression.
This is really important. Much of our web traffic today is compressed.
If you were to run Ethereal or some sort of packet capture on it, you
would see that the packets have content coding for gzip, compress, or
deflate. The UTM must have the capability of doing decompression on the
wire at network speed. The Adobe Flash plug-in vulnerability released
earlier this year is being exploited through compression. If a security
device cannot decompress, you’re at significant risk from this
attack unless you have another compensating control such as IBM
Proventia Server, Savant Pro, Bit9 or CoreTrace on the endpoints.
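
The decompression requirement is easy to demonstrate: a signature that matches the plaintext payload will miss it entirely in the gzip-encoded body. A minimal Python sketch, using a harmless stand-in payload rather than real exploit code:

```python
import gzip

# Simulate a gzip-encoded HTTP response body. In a real attack the
# compressed bytes would hide the exploit from any device that only
# inspects the raw wire data.
payload = b"<script>exploit()</script>"
compressed = gzip.compress(payload)

# A signature match against the compressed bytes fails...
assert b"exploit" not in compressed

# ...but succeeds once the device decompresses, as a UTM must do when
# the response carries "Content-Encoding: gzip".
decompressed = gzip.decompress(compressed)
assert b"exploit" in decompressed
```

The same logic applies to the other content codings: whatever transform the browser undoes before rendering, the inspection device must undo first, at wire speed.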

If PCI DSS, the Payment Card Industry Data Security Standard,
applies to your organization, you want a UTM that can help you secure
Primary Account Numbers (PANs) and other payment card data. The ability
to do document parsing and decompress documents is huge. Can your UTM
solution look at traffic carrying Microsoft Office documents, PDFs,
Rich Text Format, XML, etc.?

If credit card information, names, or Social Security numbers get
out of your network, that can cause a lot of problems, possibly
multiple class action lawsuits. IPS, UTM, and some firewall vendors
have these capabilities as well as standalone Data Loss Prevention
(DLP) solutions.
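
As a rough illustration of how such content inspection can recognize PAN data, the sketch below combines a digit-run pattern with the Luhn checksum that payment card numbers satisfy. It is a simplified stand-in: real DLP engines also parse document formats and decompress traffic first, as John notes.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used by payment cards; it filters random digit
    runs from plausible PANs and so cuts DLP false positives."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:       # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_pans(text: str):
    """Naive PAN scan: 13-16 digit runs that pass the Luhn check."""
    return [m for m in re.findall(r"\b\d{13,16}\b", text) if luhn_valid(m)]

# 4111111111111111 is a well-known test card number; the second run
# fails the Luhn check and is correctly ignored.
print(find_pans("order 4111111111111111 ref 1234567890123456"))
# → ['4111111111111111']
```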

Another feature you need is protocol-independent inspection. It should
not matter what port traffic is coming to or from, the UTM should be
able to determine the protocol.
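
A toy illustration of that idea: classify traffic by payload signature rather than by destination port. The signatures below are simplified stand-ins for this sketch; real inspection engines use full protocol decoders.

```python
# Port-independent protocol identification by payload signature.
# These matchers are deliberately minimal and illustrative only.
SIGNATURES = [
    ("http", lambda p: p.split(b" ")[0] in
        (b"GET", b"POST", b"HEAD", b"PUT", b"DELETE", b"OPTIONS")),
    ("tls", lambda p: p[:1] == b"\x16" and p[1:2] == b"\x03"),  # handshake
    ("ssh", lambda p: p.startswith(b"SSH-")),
]

def identify(payload: bytes) -> str:
    """Return the first matching protocol name, ignoring the port."""
    for name, matches in SIGNATURES:
        if matches(payload):
            return name
    return "unknown"

# HTTP tunneled over a non-standard port is still HTTP to the inspector.
print(identify(b"GET /index.html HTTP/1.1\r\n"))  # → http
print(identify(b"SSH-2.0-OpenSSH_5.1\r\n"))       # → ssh
```

The design point is that the classifier never consults a port number, so moving a service to port 8081 or tunneling it over port 80 does not blind the inspection.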

And, of course, third-party Independent Verification & Validation
testing from houses like NSS, ICSA, Tolly, etc., is extremely
important, as is federal testing. And, don't forget
regulatory compliance.

Thank you John, one of the things we like to do is give people
a bully pulpit, a chance to sound off about what matters to them. Would
you be willing to talk about what IBM offers to help secure the
intelligent network?

Certainly, Stephen. IBM Internet Security Systems (ISS) has long
recognized that threats would evolve over time. Besides integrating
with the network and endpoint, protection must also transition easily
into other attack mitigation scenarios that require in-depth protocol
analysis like data loss prevention (DLP), Web application protection,
etc. This means that today we can deliver protection on the
client’s platform of choice, be it appliance, blade or virtual
form factors, and tomorrow we’ll be able to adapt the protection
to deal with virtual server and network environments. We are able to
deliver high fidelity security content provided by X-Force through our
Protocol Analysis Module, which can be found in our network
intrusion prevention (Proventia GX), Unified Threat Management
(Proventia MX), and message security (Proventia Mail Security, also for
Lotus Notes) products. Additionally, the Proventia line provides content
inspection similar to that of DLP but limited to Personally Identifiable
Information. Lastly, we provide third-party Network DLP solutions through
Fidelis. For a full list of all the security solutions that IBM
provides, check out: http://www.ibm.com/security

Very good, John, I just have one last question. Can you tell us
just a bit about John the person, what are you doing when you are not
in front of a computer?