Saturday, December 31, 2005

Exactly one year ago today I posted a thank-you note for the great year of blogging in 2004. A look at the 2004 statistics shows that as recently as July 2004, this blog had fewer than 6,000 visitors per month, as tracked by Sitemeter. I have no idea how Atom, RSS, and other republishing affects those statistics. Soon after my first book was published, we broke through the 10,000 per month mark and have never looked back.

As you can see from the 2005 chart above, we're at the 22,000 per month mark now, and broke through 25,000 in August during my coverage of Ciscogate. This blog continues to be a nonpaying venture, despite offers to commercialize, syndicate and repackage the content elsewhere. Others already do this without my permission, but I thank those more responsible people who ask before posting my content elsewhere. For example, I've given the great publisher Apress blanket permission to quote anything I say here. This is my small way to say thank you for the books they've sent me to review.
One of my New Year's resolutions for 2006 is to dedicate specific time early each morning (before my one-year-old daughter wakes up) to read, review, and recommend books. I managed to read and review 26 technical books in 2005, but I have a backlog of over 50 waiting for attention.

I read every book upon which I make comments at Amazon.com, unlike some others who consider a rehash of a book's back cover to be a "review." I also try to avoid bad books, so don't expect too many low-star reviews.

I have found your comments to be one of the best parts of blogging in 2005. I really appreciate hearing what you have to say, either publicly as a blog comment or privately via email. I don't have time to reply to the few of you who send me multi-page running commentaries on everything I publish or blog, but I appreciate your thoughts nevertheless.

In 2006 I plan to continue blogging about subjects which interest me, like network security monitoring, incident response, forensics, FreeBSD, and related topics. I welcome any thoughts on other issues you find pressing. If you want to see how I keep track of world security events, please visit my interests page. Those are my bookmarks; I avoid browser bookmarks whenever possible.

In 2006 I also plan to devote time and resources to OpenPacket.org. Many of you have offered some form of support. As that project develops I will request assistance, either here or on the OpenPacket.org Blog.
2006 should also be a big year for TaoSecurity, my company. I am not sure if 2006 will be the year I decide to hire employees, but I am considering hiring contract help for some in-house coding projects. These projects would support the company's consulting, incident response, and forensics services. Should anything be of use to the wider community, it will appear on the TaoSecurity products page. If you would be interested in working for TaoSecurity, please feel free to send me your resume in .pdf format to richard at taosecurity dot com. I am always interested in meeting security practitioners who can administer systems properly, perform network- and host-centric incident response and forensics, write security tools, speak and publish original material, and seek to save the world one packet at a time.

I have ideas for additional, specialized training courses for 2006. At the moment demand for private 4-day Network Security Operations classes has been strong. I am working with a few different customers to support specialized training outside the core NSO focus. Some of those endeavors may be offered to the public. I will also submit proposals to speak at a few more USENIX conferences, which are public opportunities for training in network security monitoring. I post word of any place I intend to speak at my events list.

I do not have any new books scheduled for writing in 2006. Having authored or co-authored three books in three years, I expect to take a break. I have ideas for more articles like the one in Information Security Magazine. I should have an article in the February 2006 Sys Admin Magazine on keeping FreeBSD up-to-date.

I just registered for the two-day Black Hat Federal Briefings 2006 in Crystal City, Arlington, VA. Tomorrow (1 Jan 06) appears to be the last day to register for the conference at a discounted rate. I decided to pay my way to the briefings because the event is local and the lineup looks very good. The rate until tomorrow is $895, and after that the price is $1095.

Friday, December 30, 2005

Victor Oppleman, co-author of a great book called Extreme Exploits, is writing a new book. The title is The Secrets to Carrier Class Network Security, and it should be published this summer. Victor asked me to write a chapter on network security monitoring for the new book. Since I do not recycle material, I am working on a chapter with new material. I intend to discuss internal monitoring because I am consulting on such a case now.

Do any of you have stories, comments, suggestions, or other ideas that might make good additions to this chapter? For example, I am considering addressing threat-centric vs. target centric sensor positioning, internal network segmentation to facilitate visibility, tapping trunks, new sorts of taps, sensor clusters, and stealthy internal sensor deployment. Does that give any of you ideas?

Anything submitted will be given credit via an inline name reference like "Bamm Visscher points out that..." or a footnote with your name and a reference to "personal communication" or "blog comment." The chapter is due to Victor next week, so I am not looking for any large contributions. A few paragraphs or even a request to cover a certain topic would be helpful. Thank you.

Thursday, December 29, 2005

Ethereal version 0.10.14 was released Tuesday. It addresses vulnerabilities in the IRC, GTP, and OSPF protocol dissectors. Smart botnet IRC operators could inject evil traffic to attack security researchers looking at command and control messages. That's a great reason not to collect traffic directly with Ethereal. Instead, collect it with Tcpdump, then review it as a non-root user using Ethereal.
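That workflow might look like the following sketch; the interface name and file path are placeholders for your own setup:

```shell
# As root: capture to a file with tcpdump rather than sniffing live in Ethereal
tcpdump -n -i em0 -s 1515 -w /nsm/capture.pcap

# Later, as an unprivileged user: review the saved trace in Ethereal,
# so any dissector vulnerability is triggered without root privileges
ethereal /nsm/capture.pcap
```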

Wednesday, December 28, 2005

I am happy to announce the availability of the first public Sguil sensor, server, and database in VM format. It's about 91 MB. Once it has been shared with all of the Sourceforge mirrors, you can download it here. I built it using the script described earlier.

So how do you use this? First, you need to have something like the free VMware Player for Windows or Linux. You can also use VMware Workstation or another variant if you like. When you download sguil0-6-0p1_freebsd6-0_1024mb.zip and expand it, you will find a directory like this:

FreeBSD.nvram
FreeBSD.vmsd
FreeBSD.vmx
FreeBSD-000001-cl1.vmdk

By opening the FreeBSD.vmx file in VMware Player, you should be able to start the VM.

Here are some important details.

The root password is r00t.

The user analyst is a member of the wheel group, so it can su to root. The analyst password is analyst.

The user sguil is not a member of the wheel group, so it cannot su directly to root. The sguil password is sguil.

The host's management IP is 192.168.2.121. It is assigned the lnc0 interface and it is bridged via VMware.

The netmask is 255.255.255.0 and the default gateway is 192.168.2.1.

The default nameserver is 192.168.2.1.

Interface lnc1 is also bridged. It is not assigned an IP because it is used for sniffing.

You will probably want to change these parameters manually to meet your own network needs. For example, as root and logged in to the terminal:
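The commands would look something like this; the addresses below are placeholders for your own network:

```shell
# Assign a new management address to lnc0 (placeholder values)
ifconfig lnc0 inet 10.1.1.50 netmask 255.255.255.0

# Replace the default route with your own gateway
route delete default
route add default 10.1.1.1
```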

Make similar changes to the values in /etc/rc.conf if you want the new network scheme to survive a reboot.
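For example, the relevant /etc/rc.conf entries might look like this; again, the addresses are placeholders for your own network:

```
ifconfig_lnc0="inet 10.1.1.50 netmask 255.255.255.0"
defaultrouter="10.1.1.1"
```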

You'll probably also want to change /etc/hosts to reflect your new IPs.

Important: As soon as you have network connectivity to the Internet, you must update the system time. When my VM wakes up, it still thinks it is Wednesday night. If you try connecting to it with a Sguil client, the times will not match properly. I recommend running something simple like the following as root on the VM:

ntpdate clock.isc.org

This will validate outside Internet connectivity and update the time. You can also manually set the time with the 'date' command. Note this VM does not have any man pages installed. If you need them for FreeBSD, look here.

Account passwords, for example, should be changed if you want to hook up this VM in any place outside a lab.
Once the VM boots, I recommend logging in to two terminals. In one terminal, log in as user sguil. Execute the three scripts in sguil's home directory, namely the following, in this order:

sguild_start.sh
sensor_agent_start.sh
barnyard_start.sh

This will start the Sguil server, sensor, and Barnyard.

In the second terminal, log in as root. Start the following scripts:

sancp_start.sh
snort_start.sh
/usr/local/bin/log_packets.sh restart

This will start SANCP, Snort, and log_packets.sh, which uses a second instance of Snort to log full content data.

Once all the components are running, you need to connect to the Sguil server using a Sguil client. I did not install the Sguil client on the VM in order to save space (and to simplify this first round of work).

The easiest way to get a Sguil client running is to download and install the free standard ActiveTcl distribution for Windows. (Yes, Windows has the easiest client install, thanks to ActiveTcl. Linux might be as easy, but I don't have a Linux desktop to test.)

Once ActiveTcl is installed, download the Sguil client for Windows. It is a .zip that you need to extract. Once you do, change into the sguil-0.6.0p1/client directory. You'll see sguil.conf. Make the following edits:
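Working from memory of the 0.6.x client, the edits amount to pointing the client at your server and your local tools. The option names below may differ slightly in your copy, so check the comments in sguil.conf itself:

```
# Point the client at the Sguil server (my VM's management IP)
set SERVERHOST 192.168.2.121
set SERVERPORT 7734
# Where the client writes full content captures locally
set ETHEREAL_STORE_DIR "c:/tmp"
# Path to the Ethereal binary used for full content analysis
set ETHEREAL_PATH "c:/Program Files/Ethereal/ethereal.exe"
```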

Now create a c:\tmp directory, and make sure you have Ethereal installed if you want to look at full content data in Ethereal.

You're ready to try the client.

Start Sguil by double-clicking on the sguil.tk icon in the Windows explorer. Initially Windows will not know how to run .tk files. Associate this file and other .tk files with the C:\Tcl\bin\wish84.exe program.

The Sguil host is the IP address of the Sguil server. In my VM that is 192.168.2.121. If you leave the demo.sguil.net address, you will connect to Bamm's demo server.

The default port of 7734 is the right port. For the Sguil user and password, the VM uses user sguil, password sguil.

Do not enable OpenSSL encryption. The VM is not built to include that. Select the sensor shown (gruden in the VM), and then click Start Sguil. You should next see the client.

If you want to get Snort to trip on traffic, try using Nmap to perform an OS identification (nmap -O) on the management IP address of the VM.

If you have any questions, please post them here. Better yet, visit us at irc.freenode.net in channel #snort-gui.

My next idea is to add a Sguil client, and document and script the process. That may wait until Sguil 0.6.1 is released, however.

My last Sguil Installation Guide, for Sguil 0.5.3, was a mix of English description and command-line statements. This did not help much when I needed to set up a new Sguil deployment. I essentially followed my guide and typed everything by hand.

Today I decided that would be the end of that process. I am excited by the new InstantNSM project, and I intend to support it with respect to FreeBSD. But for today, I decided to just script as many Sguil installation commands as possible. For items that I couldn't easily script (due to my weak script-fu), I decided to edit the files manually and generate a patch for each one.

This post describes the end result, which you can download at www.bejtlich.net/sguil_install_v0.1.sh. I should warn you that this is not meant for public production use. However, someone trying to install Sguil might find it useful.

The purpose of this script is to automate, as much as possible, the creation of a Sguil sensor, server, and database on a FreeBSD 6.0/i386 platform. The platform is a VMware image whose hostname is gruden.taosecurity.com and whose management IP address is 192.168.2.121. I have stored several files at www.bejtlich.net to facilitate the installation. I will explain where that matters as I progress.

#!/bin/sh
#
# Sguil installation script by Richard Bejtlich (richard@taosecurity.com)
# v1-0 28 December 2005
#
# Tested on FreeBSD 6.0 RELEASE
#
# This script sets up all Sguil components on a single FreeBSD 6.0 system
# This is not intended for production use where separate sensor, server,
# and client boxes are recommended

echo "Sguil Installation Script"
echo
echo "By Richard Bejtlich"
echo
echo "This is mainly for personal use, but it documents how to build"
echo "a FreeBSD 6.0 system with Sguil sensor, server, and database"
echo "components. The Sguil client must be deployed separately."

First I update the time. I am running this in a VM and time can be problematic. With FreeBSD 6 as a guest OS on VMware Workstation, I create /boot/loader.conf with 'hint.apic.0.disabled=1' to mitigate time issues.

# Update date and time

ntpdate clock.isc.org

Next I set some environment variables. I designate my proxy server, which received heavy use as I tested this script. Note that using a proxy server means copies of patches and other files are cached. To clear the cache after changing a file and uploading it to www.bejtlich.net, I stop Squid, clear the cache map with 'echo "" >> /usr/local/squid/cache/swap.state', and restart Squid.

I have to install my own version of MySQLTcl. This was not as complicated as Barnyard. The problem with the stock package is that it is compiled against MySQL 4.1.x, and I am using MySQL 5.0.x. Simply building my own package on sguilref, a FreeBSD 6 host with MySQL 5.0.16 installed, is enough to create the proper mysqltcl package.

I do not think this is a bad way to handle the issue, although I welcome simpler suggestions. If you wanted to use my script, for example, you could copy the patches, edit them, and then have the script apply them as shown below. Note that this is one place where the sensor name and IP address matter: the above patch explicitly mentions the sensor name, gruden.

Several of the Sguil components, like Barnyard, the sensor agent, and SANCP, run as user sguil and need to write their PID files to /var/run. I decided to make /var/run mode 777 to let them write to the directory. This is not the best idea, so I might change it.

# Set up /var/run

chmod 777 /var/run

Finally I add the user 'sguil' with password 'sguil' so clients can access the Sguil server.
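This is a sguild login (distinct from the OS account of the same name), and sguild can add it directly. A sketch, assuming sguild lives in /usr/local/bin:

```shell
# Add a 'sguil' login to the Sguil server's user list;
# sguild prompts for the password interactively (I used 'sguil')
/usr/local/bin/sguild -adduser sguil
```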

In this last section I tell how to get all of the components running. By default all of them will run in the background. Each *start.sh script has an option for running in the foreground for debugging purposes, if you uncomment the foreground option and comment out the background option.

Once you have this script installed on a suitable FreeBSD 6/i386 system, you can run it. Here is the partition layout I created, using only 1024 MB. I installed the "minimal" distribution, which is the smallest non-custom distro.

I'm currently working on a VM image of FreeBSD 6.0 with the components needed for a demonstration Sguil sensor, server, and database deployment. I'm using a minimal FreeBSD installation; /usr, for example, began at 100 MB.

I intend to install as many Sguil components as possible using precompiled packages. Unfortunately, the Barnyard package used to read Snort unified output spool files does not contain support for the latest version of Sguil. To deal with this problem, I am creating a custom Sguil package.

I'm not building the package on the host that will eventually run Barnyard. That host, gruden, does not have a compiler and other development tools. Instead I'm working on the package on another FreeBSD 6.0/i386 host, sguilref. First I see what packages Barnyard needs to build.

sguilref:/usr/ports/security/barnyard# make extract
===> WARNING: Vulnerability database out of date, checking anyway
===> Found saved configuration for barnyard-0.2.0
===> Extracting for barnyard-0.2.0
=> MD5 Checksum OK for barnyard-0.2.0.tar.gz.
=> No SHA256 checksum recorded for barnyard-0.2.0.tar.gz.

At this point I need to edit the Makefile. I make a copy called Makefile.orig for reference. Then I edit the Makefile to include a new option, WITH_SGUIL, that I will be able to use when invoking 'make'. You can see the contents of the new Makefile with the diff command.
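The diff itself is not reproduced here, but the usual shape of such a knob in a port Makefile is roughly the following. The patch file name is illustrative, and the Sguil-patched Barnyard builds with Tcl support enabled:

```
.if defined(WITH_SGUIL)
# Apply the Sguil output plugin patch and build with Tcl support
EXTRA_PATCHES+=	${FILESDIR}/extra-patch-sguil
CONFIGURE_ARGS+=	--enable-tcl
.endif
```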

I know that pcre-6.4 and snort-2.4.3_1 will be installed when I put Snort on this system. That means I can do a 'pkg_add barnyard-0.2.0.tbz' and the process will only look for pcre-6.4 and snort-2.4.3_1, which will be installed prior to Barnyard.

I plan to submit these steps to the Barnyard package maintainer to see if he might be able to get them merged.

Tuesday, December 27, 2005

Michael W. Lucas wrote FreeBSD 5 SMPng, which does not appear to be online and will be available to non-USENIX members in October 2006. Michael uses layman-friendly language to explain architectural decisions made to properly implement SMP in FreeBSD 5.x and beyond. He explains that removing the Big Giant Lock involved deciding to "make it run" first and then "make it fast" second. Given the arrival of dual-core on the laptop, desktop, and server, with more cores on the way, FreeBSD's SMP work is being validated.

Marc Fiuczynski wrote Better Tools for Kernel Evolution, Please! about the problems with the current Linux kernel development model. I am not sure his proposed solution, C4 (CrossCutting C Compiler), is the answer. As mentioned in the conference report on Marc's talk at HotOS X, "Jay Lepreau commented that the problem is that Linux has a pope model -- there’s only one integrator."

Peter Baer Galvin wrote about Solaris 10 Containers. This article explained some of the concepts behind containers, which are a way to run multiple instances of the same version of Solaris on a single Solaris system. They sound more advanced than FreeBSD jails.

The December Security issue began strong with musings by new ;login: editor Rik Farrow. He makes some great points about weakness in depth. He notes that Microsoft's research OS Singularity, "like [Cisco] IOS, runs entirely in Ring 0, avoiding the performance penalties for context switches -- Singularity can switch between processes almost two orders of magnitude faster than BSD, which goes through context switching. Again, the penalty is the reduction in security by running all processes in Ring 0." Now, I am not even close to being a kernel developer, but I cannot believe Microsoft is toying with the idea of running everything in Ring 0. Is this just hubris on the part of Microsoft's developers? Do they seriously think they are smarter than everyone else who came before, and that they are going to get Singularity "right"?

Last week I ranted against the folly of a "pull the plug" first mentality to host-based forensics. Thankfully, Using Memory Dumps in Digital Forensics by Sam Stover and Matt Dickerson explains why it is not a good idea to power down immediately.

Getting free copies of these magazines is almost a good enough reason to attend USENIX conferences!

Yesterday I described why the scenario depicted above does not work. Notice, however, that the hub in the figure is an EN104TP 10 Mbps hub. Sensors plugged into the hub see erratic traffic.

If that 10 Mbps hub is replaced with a 10/100 Mbps hub, like the DS108, however, the situation changes.

With a 100 Mbps hub, each sensor can see traffic without any problems. Apparently the original issue involved the 10 Mbps hub not handling traffic from the single interface of the port aggregator tap, which must have operated at 100 Mbps and failed to autonegotiate to 10 Mbps properly.

We also previously explained why the next setup is a terrible idea:

In a very helpful comment to the last post, Joshua suggested the following setup:

This arrangement takes the output of a traditional two output tap and sends each output to a separate 100 Mbps hub. Sensors can then connect one output from each of their two sniffing interfaces to each hub. The sensor must take care of bonding the traffic on its two interfaces. This arrangement is novel because it allows more than one sensor to receive tap output. In the situation depicted, up to seven sensors could receive tap output.

So what is the bottom line? It remains true that hubs can never be used to combine the outputs of a traditional two output tap into a "single interface". However, it is possible to use them in the arrangements depicted in this post.

Monday, December 26, 2005

Several of you have asked about my experiences using FreeBSD sensors inside VMware Workstation. I use VMs in my Network Security Operations class. I especially use VMs on the final day of training, when each team in the class gets access to a VM attack host, a VM target, a VM sensor, and a VM to be monitored defensively. As currently configured, each host has at least one NIC bridged to the network. The sensor VMs have a second interface with no IP also bridged to the network. When any VM takes action against another, the sensors see it. This scenario does not describe how a VM sensor might watch traffic from a tap, however.

I decided to document how to use VMware to create a sensor that sniffs traffic from a tap. I outline two scenarios. The first uses a port aggregator tap with a single interface out to a sensor. The second uses a traditional tap with two interfaces out to a sensor. The VMware Workstation host OS in this study is Windows Server 2003 Enterprise x64 Edition Service Pack 1 on a Shuttle SB81P with a Broadcom Gigabit NIC and a quad port 10/100 Adaptec PCI NIC. I should mention at this point that this scenario is strictly for use in the classroom. I would never deploy an operational sensor inside a VM on a Windows server platform. I might consider running a sensor in a VM on a Linux server platform. Windows is not built for sniffing duties. Even with the DHCP service disabled, I still cannot configure the Windows interfaces to run without an IP address. If anyone has comments on this, please share.

The first step I take is to identify the interface I wish to use for management and the interfaces I wish to use for sniffing. A look at the Network Connections for this system shows the following interfaces are available:

I am using one of the Adaptec interfaces as a host management interface. The Broadcom Gigabit NIC is plugged into the single output from a port aggregator tap. Two other Adaptec interfaces are plugged into the two outputs of a traditional tap. The remaining Adaptec interface is not connected to anything.

Three of the NICs are in the process of "Acquiring network addresses" even with DHCP disabled on the server. Overall this output is somewhat confusing, especially if you want to match up interfaces to physical NIC ports. Here is output from ipconfig /all:

Windows is calling the management interface Ethernet adapter Local Area Connection 3. You can see it has the highest of the four Adaptec MAC addresses -- 00-00-D1-EC-F5. I do not know why Windows decided to call it LAC 3. LAC 2 is disconnected. LAC (which doesn't have a number at all -- it's simply Ethernet adapter Local Area Connection) is the Broadcom Gigabit NIC connected to the port aggregator tap. LACs 3 and 4 are connected to the two outputs of the traditional tap.

Notice the LAC does not correspond to the name of the interface shown in the screen shots! For example, LAC 3 is called Ethernet Adapter #4. (Why again did I choose to demonstrate this on Windows?)

With our NICs identified, we can match them up to VMware interfaces. Here is the summary page for the VMware Virtual Network Editor.

This screen is a little cramped, so take a look at the next screen shot showing the Host Virtual Network Mapping.

What I'm doing here is specifically assigning VMnets to individual physical interfaces. This will allow me to assign these VMnets to virtual interfaces on each VM, which I do next. Before starting that process, here is the auto interface bridging selection tab in VMware Workstation:

This shows that three of my adapters are specifically selected to not be automatically bridged.

Now let's look at the host configuration for the VM sensor. The box has two interfaces. The first is automatically bridged. The second has a custom setup.

The first interface, lnc0, uses the automatic bridge settings to connect to an automatically chosen adapter. This will be LAC 3.

The second interface has a custom setting. Here it will listen to the Broadcom Gigabit interface plugged into the port aggregator tap.

Once I boot the sensor VM, I can SSH to its management interface (lnc0) and see ifconfig output:

No problem. Now let's see how we can handle combining dual outputs from the traditional tap.

The first issue is dealing with the limitation of having only three virtual NICs in any VM. To address this, we will redeploy lnc1 to watch one of the outputs from the traditional tap, and create lnc2 to watch the other.

With this setup, Ethernet 2 is watching VMnet 3 and Ethernet 3 is watching VMnet 4.

With the interfaces created and the sensor booted, I bond them with the following script:
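The script itself did not survive into this archive, but the standard FreeBSD approach at the time used netgraph's ng_one2many node to merge the two tap outputs into a single virtual interface (ngeth0), which the sniffing process then monitors. A sketch, run as root; verify the message syntax against ng_one2many(4):

```shell
#!/bin/sh
# Merge traffic arriving on lnc1 and lnc2 into virtual interface ngeth0
kldload ng_ether 2>/dev/null

# Create the virtual interface ngeth0 and hang a one2many node off it
ngctl mkpeer . eiface hook ether
ngctl mkpeer ngeth0: one2many lower one

# Connect each physical sniffing interface to a "many" hook
ngctl connect lnc1: ngeth0:lower lower many0
ngctl connect lnc2: ngeth0:lower lower many1

# Sniff promiscuously and do not rewrite source MACs
ngctl msg lnc1: setpromisc 1
ngctl msg lnc2: setpromisc 1
ngctl msg lnc1: setautosrc 0
ngctl msg lnc2: setautosrc 0

# Enable both links so traffic from each tap output is delivered
ngctl msg ngeth0:lower setconfig "{ xmitAlg=1 failAlg=1 enabledLinks=[ 1 1 ] }"

ifconfig lnc1 up
ifconfig lnc2 up
ifconfig ngeth0 -arp up
```

Snort, SANCP, and the full content logger would then listen on ngeth0 instead of a physical interface.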

I've written about not using taps with hubs in January 2004 and again in a prereview of Snort Cookbook. The diagram below shows why it's a bad idea to try to "combine" outputs from a traditional tap into a hub.

The diagram shows a traditional two-output tap connecting to a hub. Why would someone do this? This unfortunate idea tries to give a sensor with a single sniffing interface the ability to see traffic from both tap outputs simultaneously. The proper way to address the issue is shown below.

A method to bond interfaces with FreeBSD is listed here.
We could avoid the interface bonding issue if we replace the dual output tap with a so-called port aggregator tap, like the one pictured at left. As long as the total aggregate bandwidth of the monitored link does not exceed 100 Mbps (for a 100 Mbps tap), then we can use it as shown below.

What do we do if we have more than one sensor platform? In other words, we may have an IDS and some other device that needs to inspect traffic provided by the port aggregator tap. We might be tempted to do the following, which shows putting the single output from the port aggregator tap into a hub, then plugging the two sensors into the hub.

This is a bad idea. The interface provided by the single port aggregator tap output is full duplex. It will not work properly when connected to the inherently half duplex interface on the hub. When each sensor interface is plugged into the hub, they will auto-negotiate at half duplex as well. Subtle problems will appear when they try to monitor traffic sent from the tap. Consider the following ICMP traffic sniffed using a scenario like that shown above. Host 69.243.40.166 used the -s 256 option for ping to send larger than normal ICMP packets.

The first four packets look ok. Echo request seq 0 is matched by echo reply seq 0, and request seq 1 by reply seq 1. The same doesn't hold for seq 2, which is missing its echo request. Later seq 9 appears, after no problems with ICMP seq 3-8. Suddenly there is no mention of seq 10. Seq 12 is ok, but the echo request for seq 13 is abnormally truncated! Later we see the echo request for seq 24, but no reply. We see the echo request and reply for seq 25, only to be followed by an abnormally truncated echo request for seq 26. This is definitely troublesome. From the perspective of the host sending the ICMP traffic, no packets were dropped or received abnormally.

The proper way to address this problem, if port aggregation is desired, is to use a dual port aggregator tap, as shown below.

That solution provides a single tap output interface to each sensor.
If one does not want to use port aggregation, and one can have the sensor bond interfaces, something like the Regeneration Tap shown at left can be used. In this case, two outputs are provided for each sensor, and they bond them together to see a single full duplex traffic stream.

Notice that in no circumstances can one combine a tap and a hub. Therefore, taps and hubs never, ever, mix. Remember that this holiday season!

Update: Ok, that is not entirely accurate. It is accurate for the scenarios depicted here, but some creative thinking and a very helpful comment by Joshua resulted in this follow-on post!

What conferences do you attend? Do you think I should try to speak there? Based on your knowledge of my interests (through this blog), what do you think I should discuss? Should I speak to your company or organization? At the moment I have several private Network Security Operations classes on tap for 2006, and my schedule for the first half of the year is already filling.

Every time I attend a USENIX conference, I gather free copies of the ;login: magazine published by the association. The August 2005 issue features some great stories, with some of them available right now to non-USENIX members. (USENIX makes all magazine articles open to the public one year after publication. For example, anyone can now read the entire December 2004 issue.)

An article which caught my eye was Forensics for System Administrators by Sean Peisert. Although the USENIX copy of the article won't be published until August 2006, you can read Sean's copy here (.pdf).

I thought the article was proceeding well until I came across this advice.

"What happens when there is some past event that a system administrator wishes to understand on their system? Where should the administrator, now a novice forensic analyst, begin? There are many variables and questions that must be answered to make proper decisions about this. Under almost all circumstances in which the system can be taken down to do the analysis, the ideal thing to do is halt or power-off the system using a hardware method." (emphasis added)

Is he serious? The article continues:

"[T]he x86 BIOS does not have a monitor mode that supports this [a hardware interrupt]. The solution for everyone else? Pull the plug. The machine will power off, the disk will remain as-is, and there will be no possibility of further contamination of the evidence through some sort of clean-up script left by the intruder, as long as the disk is not booted off or mounted in read/write mode again. The reason for stopping a machine is that it prevents further alteration of the evidence. The reason for halting with a hardware interrupt, rather than using the UNIX halt or shutdown command is that if a root compromise occurred, those commands could have been trojaned by an intruder to clean up evidence."

I can't believe I'm reading this advice in 2005, only 6 days from 2006. This is the advice I heard nearly 10 years ago. "Pulling the plug" as the first step in a forensic investigation is absolutely terrible advice. I am not a host-based forensics guru, but I know that a live response, first described in the June 2001 book Incident Response by Mandia, Prosise, and Pepe, should be part of even the most basic forensically-minded sys admin's techniques. Sean could have even looked into the ;login: archives to find Keith Jones' article in the November 2001 issue describing live response.

Live response is a technique to retrieve volatile information from a running system in a forensically sound manner. Live response can be frustrated by some binary and kernel alteration techniques, but it is a good (non-network-centric) first step whenever a host is suspected of being compromised. Those who want to know more about live response, and see how helpful the advice can be, will enjoy reading Real Digital Forensics.
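To make the idea concrete, here is a minimal sketch of the kind of volatile data a live response gathers before anyone touches the power. The command list is illustrative only; a real response kit runs trusted, statically linked binaries from read-only media rather than the victim's own /bin:

```shell
#!/bin/sh
# Minimal live-response sketch: record volatile state to a file.
# In a real investigation, use known-good binaries, not the victim's own.
OUT=/tmp/live-response.txt
{
  echo "=== date ===";                date
  echo "=== uptime ===";              uptime
  echo "=== logged-in users ===";     who
  echo "=== network connections ==="; netstat -an
  echo "=== running processes ===";   ps aux
} > "$OUT" 2>&1
echo "Volatile data saved to $OUT"
```

Only after collecting this sort of snapshot would one consider taking the system down for duplication.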

Sean tries to defend pulling the plug here:

"In our first example intrusion, I took a preliminary look at the syslog and saw that dates of suspicious logins went back at least three weeks. Given that the intrusion seemed to be going on for so long, I decided that I could no longer trust the system to reliably and accurately report evidence about itself. Therefore, pulling the plug on the machine was the best option."

That is a really weak excuse. Certainly a non-ankle-biter attacker will take steps to hide his presence. That does not mean that no attempt should be made to collect volatile system information!

Sean continues:

"It is certainly the case that halting a system can help perserve more evidence, particularly that in swap, slack, or otherwise unallocated space on disk. But it also can destroy some evidence. For example, halting a system will wipe out the contents of memory, hindering the ability of an analyst to dump a memory image to disk. However, in the forensic discussions in this article, slack space and memory dumps are outside the scope of our analysis. In our case, halting a system merely helped to preserve real evidence, and had the intrusion in our first example been discovered sooner, and the system sooner halted as a result, the intruder would have had less time to cover their tracks. Then, as I will discuss, certain helpful log files that were deleted may have been recoverable."

If Sean is worried that an intruder will take actions to "cover their tracks," then the live response can be performed after the victim host has been cut off from the Internet. Sure, the most 31337 attackers may detect this and start self-cleansing procedures, but how often does that happen? Also, collecting live response data does not usually trigger any cleaning mechanisms. The sort of data one collects is the normal information a system administrator might inspect during the course of regular duties.

The fundamental issue here is whether pulling the plug should be the first response activity or not. In my experience, cutting off remote access is the first step. Analysis of NSM data involving the target host is second. Live response is the third. Forensic duplication and analysis is the fourth, if the previous two steps point to compromise and the resources for investigation are available.

This part of the article makes me sad:

"This material is based on work sponsored by the United States Air Force and supported by the Air Force Research Laboratory under Contract F30602-03-C-0075 and performed in conjunction with Lockheed Martin Information Assurance. Thanks to Sid Karin, Abe Singer, Matt Bishop, and Keith Marzullo, who provided valuable discussions during the writing of this article."

First, why is the Air Force paying for advice that should have been abandoned in 1998, the last time I remember the Air Force suggesting these sorts of actions? Second, why didn't any of the article reviewers speak out against this bad advice?

Saturday, December 24, 2005

Yesterday I blogged about reprinted material in Syngress' "new" Writing Security Tools and Exploits. A comment on that post made me take another look at this book in light of other books by James Foster already published by Syngress. Here is what I found.

Chapter 3, "Exploits: Stack" is the same as Chapter 5, "Stack Overflows" in Buffer Overflow Attacks, published several months ago.

Chapter 4, "Exploits: Heap" is the same as Chapter 6, "Heap Corruption" in Buffer.

Chapter 5, "Exploits: Format String" is the same as Chapter 7, "Format String Attacks" in Buffer.

Friday, December 23, 2005

Yesterday I posted a pre-review for Penetration Tester's Open Source Toolkit. I wrote that I thought the two chapters on Metasploit looked interesting. Today I received a review copy of the new Syngress book pictured at left, Writing Security Tools and Exploits by James Foster, Vincent Liu, et al. This looks like a great book, with chapters on various sorts of exploits, plus sections on extending Nessus, Ethereal, and Metasploit.

Metasploit, hmm. I looked at chapters 10 and 11 in Writing and found them to be identical to chapters 12 and 13 in Penetration. Identical! I can't remember the last time I saw a publisher print the same chapters in two different books. I assume James Foster wanted the chapters he wrote for Penetration to appear in Writing because he follows with a new chapter 12 on more Metasploit extensions.

This realization made me remember another Syngress book that I received earlier this year -- Nessus, Snort, & Ethereal Power Tools. I saw that Noam Rathaus had written chapters on Nessus for both Power Tools and Penetration. Could they be the same? Sure enough, chapters 3 and 4 in Power Tools match chapters 10 and 11 in Penetration.

So, 4 out of the 13 chapters in Penetration are published in other books. I would enjoy hearing someone at Syngress explain this, or perhaps one of the authors could comment?

Real thin clients, like the Sun Ray 170, don't run operating systems like Windows or Linux. I like the Sun Ray, since its Sun Ray Server Software runs on either Solaris or Red Hat Enterprise Linux. That's fine for users who want to access applications on Solaris or Linux. What about those who need Windows? I can think of four options:

Run a Windows VM inside the free VMware Player on the Red Hat Enterprise Linux user's desktop.

Trafshow is an ncurses-based program that shows a snapshot of active network sessions in near real time. I like to use it with OpenSSH sessions on sensors to get a quick look at hosts that might be hogging bandwidth. Recently Trafshow 5 became available in the FreeBSD ports tree (net/trafshow), so I have started using it.

When I showed it in class last week, I realized I did not recognize the color scheme depicted in the screen shot above. I learned that the configuration file /usr/local/etc/trafshow controls these colors:

As you can see in the screen shot, we have SSH, WHOIS, ICMP, DNS, IRC, and NTP active.

You may notice records without port information. For example, the 7th record shows source 69.243.40.166 and destination 204.152.184.73 speaking protocol 6 (TCP). No ports are listed. However, the first two records list the two sides of a conversation between those two hosts. Similarly, the last two records show traffic involving 69.243.40.166 and 65.201.175.103, with no ports. If we look at the 9th record, however, we see those two IPs speaking on port 43 TCP (WHOIS).

A quick look at Argus data from yesterday (when I took this screenshot) reveals that the port 43 TCP traffic was the only conversation between those two hosts:

To disable this NetFlow collector function, invoke Trafshow with the '-u 0' option.

One feature of Trafshow 5 that I like is the ability to listen on an interface that does not have an IP address assigned. Previous Trafshow versions would complain and fail if they were told to listen on an interface with no IP.
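For reference, here are a few invocations along the lines discussed above. Treat them as a sketch: the interface names are assumed, and the flags reflect my reading of the trafshow 5 behavior described here, so check the man page before relying on them.

```shell
# Illustrative trafshow 5 invocations (run as root; interface names assumed).
trafshow -i em0          # watch active sessions on em0
trafshow -i em0 -u 0     # disable the NetFlow collector function, per above
trafshow -i em1          # trafshow 5 accepts an interface with no IP assigned
```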

Thursday, December 22, 2005

Today I received a copy of the new Syngress book Penetration Tester's Open Source Toolkit by Johnny Long, Chris Hurley, SensePost, Mark Wolfgang, Mike Petruzzi, et al. This book appears unnecessarily massive; it's probably half again as thick as my first book, but at 704 pages it's nearly 100 pages shorter than Tao. I think Syngress used thicker, "softer" paper, if that makes sense to anyone.

The majority of the book appears to be the standard sort of hacker stuff one finds in books like Hacking Exposed, with some exceptions. The book contains two chapters on Metasploit which look helpful. I do not know yet how well these Metasploit 2.0-based chapters apply to the new Metasploit 3.0, whose alpha stage was announced last week. Similarly, chapters on Nessus may not hold up well for Nessus 3.0, also recently released.

A major selling point of the new book is its integration of the Auditor live CD. I learned that Auditor is going to merge with "competitor" IWHAX to produce BackTrack in early 2006. Consolidation among similar open source projects to pool resources and create better results? Heresy!

Wednesday, December 21, 2005

Given the recent coverage of wiretapping in the mainstream media, I thought I would point out two excellent articles in the latest issue of IEEE Security & Privacy Magazine. Thankfully, both are available online:

Both concentrate on technical issues of wiretapping. The first concentrates on how to tap a physical line or switch, and ways to defeat those taps. The second describes why incorporating wiretap features into VoIP is a bad idea. Each article discusses relevant laws.

It describes the company's Application-Oriented Networking (AON) initiative. According to this story that quotes Cisco personnel, AON "is a network-embedded intelligent message routing system that integrates application message-level communication, visibility, and security into the fabric of the network." According to this document:

Cisco AON is currently available in two products that integrate into Cisco switches and routers:

Cisco Catalyst® 6500 Series AON module, which is primarily deployed in enterprise core or data centers

Cisco 2600/2800/3700/3800 series AON module, which is primarily deployed at branch offices

"The Cisco AON module in the branch puts intelligent decision-making at the network edge. It can intercept and analyze traffic in various message formats and protocols and bridge between them, provide security, and validate messages, creating a transparent interface between trading partners and, in effect, a good business-to-business gateway. It can manage remote devices that send messages to the Cisco Integrated Services Router in the branch. It can also filter messages from multiple sources that come into the branch router for duplicates or by other criteria, aggregate them, make decisions according to instructions, and transmit selected messages to a sister AON module deployed in the data center." (emphasis added)

I find this aspect very interesting. It sounds like AON could be used to enforce protocol and security policies. I wonder if this might eventually happen on a per-port basis? Security on a per-port basis would allow validation of network traffic itself, not just whether a host should be accessing the network. Per-port security would move the job of enforcing security away from choke-point products like firewalls (which include IPSs, application firewalls, whatever) and into switches.

This is not necessarily a great idea, as this Register article confirms. One of the strengths of the Internet has been the fact that it inverted the telecom model, where the network was smart and the end device (the phone) was dumb. The traditional Internet featured a relatively dumb network whose main job was to get traffic from point A to point B. The intelligence was found in those end points. This Internet model simplified troubleshooting and allowed a plethora of protocols to be carried from point A to point B.

With so-called "intelligent networking," points A and B have to be sure that the network will transmit their conversation, and not block, modify, or otherwise interfere with that exchange to the detriment of the end hosts. As a security person I am obviously in favor of efforts to enforce security policies, but I am not in favor of adding another layer of complexity on top of existing infrastructures if it can be avoided.

"Bob Stephenson, chief technology officer for command, control, communications, computers and intelligence operations at Spawar, said the Navy plans to use the thin-client systems from Sun Microsystems on all major surface ships in the fleet.

As a former Air Force officer, I'm biased towards the Air Force. However, I've written that I think the Air Force is fighting the last war, having decided to adopt "standardized and securely configured Microsoft software throughout the service." Whee, that only took what, 10 years? Kudos to the Navy for stepping forward with an innovative solution.

Sguil 0.6.0p1 introduced the use of MERGE tables in MySQL to improve database performance.

Sguil 0.6.1, in development now, will bring UNION functionality to database queries. This will also improve performance.

Consider the following standard event or alert query in Sguil. This query says return Snort alerts where 151.201.11.227 is the source IP OR the destination IP. OR is a slow operation compared to UNION. Sguil 0.6.1 will use a new query.

Here we look for Snort alerts where 220.98.198.35 is the source IP address, and use UNION to return those results with alerts where 220.98.198.35 is the destination IP address.

UNION functionality was not available in MySQL 3.x, but it appeared in 4.x. Many Sguil users are running MySQL 5.x now.

Those screen shots just show the WHERE portions of the database queries. Here is what each version of these similar queries looks like in its entirety:
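As a sketch of the difference between the two styles: the table and column names below are simplified placeholders, not Sguil's actual schema, and the query text is mine, not Sguil's.

```shell
# Hypothetical sketch only; "event", "src_ip", etc. are simplified stand-ins
# for Sguil's real schema. IPs are stored as unsigned integers, hence
# INET_ATON().
#
# Old style -- one query, where OR forces MySQL into a single scan strategy:
#
#   SELECT signature FROM event
#    WHERE src_ip = INET_ATON('220.98.198.35')
#       OR dst_ip = INET_ATON('220.98.198.35');
#
# New style -- two simple queries, each free to use its own index,
# merged with UNION:
mysql sguildb -e "
  SELECT signature FROM event WHERE src_ip = INET_ATON('220.98.198.35')
  UNION
  SELECT signature FROM event WHERE dst_ip = INET_ATON('220.98.198.35');"
```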

Tuesday, December 20, 2005

This morning I read stories by Brian Krebs and Joris Evers explaining how Guidance Software, maker of the host-based forensics suite Encase, was compromised. Guidance CEO John Colbert admits "a person compromised one of our servers," exposing the "names, addresses and credit card details" of 3,800 Guidance customers. Guidance claims to have learned about the intrusion on 7 December. Victim Kessler International reports the following:

"Our credit card fraud goes back to Nov. 25. If Guidance knew about it on Dec. 7, they should have immediately sent out e-mails. Why send out letters through U.S. mail while we could have blocked our credit cards?"

"Guidance stored customer names and addresses and retained card value verification, or CVV, numbers, Colbert said. The CVV number is a three-digit code found on the back of most credit cards that is used to prevent fraud in online and telephone sales. Visa and MasterCard prohibit sellers from retaining CVV once a transaction has been completed."

Reporter Krebs explains the implications:

"Companies that violate those standards can be fined $500,000 per violation. Credit card issuers generally levy such fines against the bank that processes payment transactions for the merchant that commits the violations. The fines usually are passed on to the offending company."

Since Guidance's customers include "hundreds of security researchers and law enforcement agencies worldwide, including the U.S. Secret Service, the FBI and New York City police," I don't think those customers will tolerate this breach of trust.

Why did it take Guidance at least 12 days (from the first known fraudulent purchases on 25 Nov to the reported discovery on 7 Dec) to learn they were owned? I think this is an example of a company familiar with creating host-centric forensic software, but unfamiliar with sound operational security and proper policy, architecture, and monitoring to prevent or at least detect intrusions. Furthermore, who will be fired and/or fined for storing CVVs indefinitely?

Monday, December 19, 2005

I finally got a chance to try Tcpdump 3.9.4 and Libpcap 0.9.4 on FreeBSD using the net/tcpdump and net/libpcap ports. I was unable to install them using packages, so I used the ports tree. I initially got an error, caused by this section of tcpdump's Makefile:

# TODO: Add strict sanity check that we're compiling against a
# version of libpcap with which this tcpdump release is compatible.
#
.if defined(TCPDUMP_OVERWRITE_BASE) || !defined(WITH_LIBPCAP_BASE)
LIB_DEPENDS=	pcap.2:${PORTSDIR}/net/libpcap
.endif

I noticed this file was created when building libpcap-0.9.4:

/usr/ports/net/libpcap/work/libpcap-0.9.4/pcap.3

I also saw this on the system:

/usr/src/contrib/libpcap/pcap.3

So I changed tcpdump's Makefile like so:

LIB_DEPENDS= pcap.3:${PORTSDIR}/net/libpcap

I was then able to finish the installation. (I emailed the port maintainer asking if my fix made sense.) I ran Tcpdump:

To look at the new man page for 3.9.4, I had to tell 'man' where to find the new man pages:

man -M /usr/local/man tcpdump

In the man page I saw the following two options:

-C      Before writing a raw packet to a savefile, check whether the file is
        currently larger than file_size and, if so, close the current savefile
        and open a new one. Savefiles after the first savefile will have the
        name specified with the -w flag, with a number after it, starting at 1
        and continuing upward. The units of file_size are millions of bytes
        (1,000,000 bytes, not 1,048,576 bytes).

...edited...

-W      Used in conjunction with the -C option, this will limit the number of
        files created to the specified number, and begin overwriting files
        from the beginning, thus creating a 'rotating' buffer. In addition, it
        will name the files with enough leading 0s to support the maximum
        number of files, allowing them to sort correctly.

Awesome. Let's try it. Here I tell Tcpdump to save five 10 million byte files using the -W 5 and -C 10 switches.
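The invocation looks along these lines (the interface name em0 is my assumption, and the command requires root):

```shell
# Rotate through five capture files of roughly 10 million bytes each,
# named test1.lpc0 through test1.lpc4 (interface name is assumed).
tcpdump -i em0 -C 10 -W 5 -w test1.lpc
```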

So, this system works as advertised. Unfortunately, the file naming convention simply adds 0, 1, 2, 3, or 4 to the end of the specified file name of test1.lpc. This is not how Tethereal handles file naming. I am not sure yet if Tcpdump's system will be suitable for my needs. I imagine that when GB-sized captures are involved, the file timestamps may be enough to differentiate them?

By the way, people always ask "Why don't you use Tcpdump's -s 0 option to automatically specify a snaplen?" Here's why: