Thursday, September 30, 2004

-d Dump the compiled packet-matching code in a human readable form to standard output and stop.

I've never used that option before, but I just saw a Tcpdump developer use it to confirm a Berkeley packet filter in this thread. The user in the thread is trying to see TCP or UDP packets with a source address of "centernet.jhuccp.org" (162.129.225.192). First he specifies an incorrect BPF filter, which the developer then corrects. This is mildly interesting, but the useful information on the -d option appears in this post.

loads the 2-byte big-endian quantity at an offset of 12 from the beginning of the packet - which, on an Ethernet packet, is the type/length field in the Ethernet header - and compares it with 0x0800 - which is the type code for IPv4 - and, if it's not equal, jumps to instruction 8, which returns 0, meaning "reject this packet" (i.e., it rejects all packets other than IPv4 packets);

loads the 4-byte big-endian quantity at an offset of 26 from the beginning of the packet - which, for an IPv4-over-Ethernet packet, is the source IP address in the IPv4 header - and compares it with 0xa281e1c0 - which is 162.129.225.192, or "centernet.jhuccp.org" - and, if it's not equal, jumps to instruction 8 (i.e., it rejects all packets that don't have a source IP address of 162.129.225.192);

loads the one-byte quantity at an offset of 23 from the beginning of the packet - which, for an IPv4-over-Ethernet packet, is the protocol type field in the IPv4 header - and, if it's equal to 6 - i.e., if it's a TCP packet - jumps to instruction 7, which returns 96, meaning "accept this packet and get its first 96 bytes", and, if it's not 6, jumps to instruction 6, which does the same check for 17, i.e. UDP.
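Putting the three steps together, the compiled filter looks like the listing below. This is reconstructed from the explanation above, so it should match the actual `tcpdump -d` output for a filter like `src host 162.129.225.192 and (tcp or udp)` on an Ethernet link; the comments are mine, not part of tcpdump's output:

```
(000) ldh      [12]                           load Ethernet type/length field
(001) jeq      #0x800           jt 2   jf 8   IPv4? if not, jump to 8 (reject)
(002) ld       [26]                           load IPv4 source address
(003) jeq      #0xa281e1c0      jt 4   jf 8   162.129.225.192? if not, reject
(004) ldb      [23]                           load IPv4 protocol field
(005) jeq      #0x6             jt 7   jf 6   TCP (6)? accept; else check UDP
(006) jeq      #0x11            jt 7   jf 8   UDP (17)? accept; else reject
(007) ret      #96                            accept packet, capture 96 bytes
(008) ret      #0                             reject packet
```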

I found this explanation very enlightening and I appreciate Guy taking the time to discuss it.

Are you still running Red Hat Linux 7.3 or 9.0? What about Fedora Core 1? If you want to keep those systems patched now that Red Hat has suspended support, consider the Fedora Legacy Project. I just read their advisory for Tcpdump, notifying users of updated libpcap and Tcpdump packages. (Note: The URLs in the advisory are funky. Visit http://download.fedoralegacy.org/redhat/9/updates/i386/ to access the RPMs for Red Hat Linux 9.0 directly.) I used their libpcap and Tcpdump RPMs to patch a system and had no problems.

Wednesday, September 29, 2004

I previously reported my successful installation of FreeBSD on a Soekris net4801. While the Soekris is a really popular small form factor system, it lacks a fan to keep moving components (like laptop HDDs) cool. It's also not the sort of system you can use to replace a tower PC, since it doesn't have video output, a CD, or mouse and keyboard inputs.

If you need the sort of functionality a true PC provides, but want small form factor, check out Padova Technologies. I just installed FreeBSD 5.3-BETA6 on their SlickNode Mini PC. You can see my dmesg output at the NYCBUG dmesg archive. The box is equipped with two NICs -- one is an Intel NIC (fxp0) and the other is unfortunately a Realtek NIC (re0). When you order a SlickNode you can opt for a quad NIC to be installed, or a Wi-Fi card, or several other options. This is a great appliance box for systems that need more of a PC's functionality.

This fall will see the release of upgrades to several open source operating systems I use. First, FreeBSD 5.3 is currently scheduled to be released on 17 October. Over the weekend a sixth beta was cut and a seventh and final beta will be produced this weekend. The following week a release candidate (RC) will arrive. Although no second RC is planned, I expect to see one. The arrival of FreeBSD 5.3 RELEASE will mark the 5.x tree as STABLE. The current STABLE tree, 4.x, will go into maintenance mode. The 6.0 tree is already marked as CURRENT; that's where cutting edge developments are introduced before being "merged from CURRENT" (mfc) to the STABLE tree. I recommend anyone interested in trying FreeBSD for the first time wait until 5.3 is released in mid-October. FreeBSD 5.2.1, the latest in the 5.x tree, arrived in February 2004.

On Monday RC1 for NetBSD 2.0 was announced. NetBSD 2.0 has been several years in the making. The last version, 1.6.2, was a patch release that arrived in March 2004. The last major version, 1.6, was released in September 2002.

If you like regular releases, you can't beat OpenBSD. OpenBSD 3.6 will begin shipping 1 November 2004. OpenBSD 3.5 started shipping in May 2004.

Finally, I'm not much of a Linux user, but I am looking forward to the long-awaited next release of Debian, called sarge. The last discussion of a timetable suggested a September release, but it looks like October or even later is more realistic. The last stable Debian version, woody, arrived in July 2002.

At first glance it seems that upgrades to some of these operating systems, especially NetBSD and Debian, have been few and far between. Then I visited the Microsoft server timeline and desktop timelines. These reminded me that the last desktop release, Windows XP, arrived in October 2001. We got a serious upgrade in XP SP2, but that took 3 years to materialize. On the server side, we have to wait until 2006 for Longhorn. The open source OSs are interesting because they are both client and server operating systems, and still beat the three year Microsoft development cycle.

Saturday, September 25, 2004

Adam Shostack posted a response to my Thoughts on Digital Crime blog entry. Essentially he questions the "bandwidth" of the law enforcement organizations I listed, i.e., their ability to handle cases. The FBI CART Web page says "in 1999 the Unit conducted 2,400 examinations of computer evidence." At HTCIA I heard Mr. Kosiba state that thus far, in 2004, CART has worked 2,500 cases, which may involve more than one examination per case. The 50+ CART examiners and support personnel and 250 field examiners have processed 665 TB of data so far this year! The CART alone spends $32,000 per examiner on equipment when they are hired, and another $12,500 per year to upgrade each examiner's equipment.

This is a sign that the DoJ is pouring money into combating cyber crime. Of course local and state police do not have the same resources, but especially at the state level we are seeing improvements.

If more resources are being plowed into fighting cybercrime, what is the likelihood that law enforcement will decline to prosecute juveniles? I believe being a teenager isn't a viable way to escape prosecution either. During HTCIA I attended a talk by Rick Aldrich, former AFOSI legal advisor. He explained how it has traditionally been difficult to prosecute juvenile offenders in federal court. The state of California, however, has a special unit set up to investigate and prosecute juvenile cybercriminals. Thanks to California's system, other states that identify underage intruders now look for ways to have California prosecute these offenders.

The last way to avoid a trip to the pokey is to hack from overseas locations. A visit to Cybercrime.gov shows plenty of active prosecutions for "hacking," including some foreigners. It's true that the people least likely to be prosecuted are those who physically reside in a country whose law enforcement agencies dislike working with the US government. However, even a country like Romania is working to catch intruders. I still believe all of this does not bode well for low- to mid-level cyber criminals -- you will be caught. Justice may be slow but it does not appear to give up. I have one caveat -- there must be evidence to support a prosecution. If a victim doesn't collect the sorts of high-fidelity data which can show damage and link it to the intruder's action, it's difficult to attract law enforcement's interest.

"Examiners must have mastery of the theories, procedures, and techniques necessary to produce reliable results and conclusions.

Standards and Criteria

Digital evidence examiners should have a baccalaureate degree with science courses.

Examiners must have a good understanding of the principles, uses, and limitations of the hardware, software, and the methods and procedures as applied to the tasks performed.

Examiners must have education and training/experience commensurate with the examinations and testimony provided. Independent case examinations must not be undertaken until extensive instruction from a qualified examiner has been completed.

Examiners must have successfully completed a competency test.

A proficiency test must be successfully completed by each examiner at least annually."

There are a few other items in the .pdf, so I recommend reading it or requesting the original documents from ASCLD/LAB itself.

"Three high-risk vulnerabilities have been identified in the Symantec Enterprise Firewall products and two in the Gateway products. All are remotely exploitable and allow an attacker to perform a denial of service attack against the firewall, identify active services in the WAN interface and exploit the use of default community strings in the SNMP service to collect and alter the firewall or gateway's configuration. Moreover, the administrative interface for the firewall does not allow the operator to disable SNMP nor change the community strings. The Gateway Security products are vulnerable to all but the denial of service issue."

"Symantec resolved three high-risk vulnerabilities that had been identified in the Symantec Firewall/VPN Appliance 100, 200 and 200R models. The Symantec Gateway Security 320, 360 and 360R are vulnerable to only two of the issues, which have been resolved."

The days of directly attacking firewalls are not over, as some might think!

"Over the past six months, the average time between the announcement of a vulnerability and the appearance of associated exploit code was 5.8 days... This means that, on average, organizations have less than a week to patch all their systems on which the vulnerable application is running.

Over the first six months of 2004, the number of monitored bots rose from well under 2,000 computers to more than 30,000.

Over the first six months of 2004, Symantec observed worm traffic originating from Fortune 100 corporations. This data was gathered not by monitoring the Fortune 100 companies themselves, but by analyzing attack data that revealed the source IP addresses of attack activity. The purpose of this analysis was to determine how many of these systems were infected by worms and actively being used to propagate worms. More than 40% of Fortune 100 companies controlled IP addresses from which worm-related attacks propagated.

In the first half of 2004, 39% of disclosed vulnerabilities were associated with Web application technologies.

Symantec expects that recent Linux and BSD vulnerabilities that have been discovered and used in proof-of-concept exploits will be used as exploit-based worms in the near future.

[Regarding appliances like SOHO routers, firewalls, and VPN endpoints,] as technical details of these devices have become public, attackers have modified the firmware to provide internal access and even allow attackers to monitor traffic on the network."

I recommend downloading and perusing the whole report.

The Six Secrets report confirmed a few of my opinions. For example, it seems the idea of a "return on investment" (ROI) for security still doesn't convince managers to pay for security:

"Negative factors (such as fear of litigation) remained the primary drivers of security spending. Positive factors (such as contributing to business objectives) were less common."

Paying for security is like buying insurance. Security is an exercise in cost avoidance, and there is little or no "return" on an "investment" in security. Money spent to prevent or mitigate intrusions is not an "investment."

Currently security folks spend time on vulnerabilities and assets, but hardly any on threats. How did this happen?

Organizations began their security evolution by looking at vulnerabilities, which launched the "vulnerability management" craze. At first every piece of infrastructure was considered "critical," which meant nothing was truly important. Once asset value was taken into account, assets were prioritized and vulnerabilities in the most critical assets were addressed first via patch management, access control, and other countermeasures. This process encompasses steps 3-6 above.

Unfortunately, far too many security experts ignore the third element of the risk equation -- threats. Of course there are vendors who sell "Threat Correlation Modules," but these have nothing to do with true threats. Remember that a threat is a party with the capabilities and intentions to exploit a vulnerability. An intruder in Denmark with a hatred of Shell Oil and a zero-day exploit for Apache is a threat to Shell Oil. A buffer overflow condition in Apache is a vulnerability for Shell Oil if it's running the affected software. A product which offers information on a vulnerability in Apache while identifying the Apache Web servers in an organization with that vulnerability is a vulnerability correlation product, not a "threat correlation module."

So how does an organization acquire the third piece of the risk equation -- threats? The answer is monitoring. I advocate network security monitoring, which is "the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions." Only by acquiring network awareness, primarily through monitoring for suspicious and malicious activity, can one identify and assess threats. Why spend time, people, and equipment securing a vulnerability in SNMP, for example, if hardly anyone is seeking to exploit it?

Until more of the security world realizes that network awareness is just as important as enumerating vulnerabilities and prioritizing assets, the adversary will have the upper hand.

If you'd like to know more about this sort of thinking, chapter 1 of The Tao of Network Security Monitoring addresses the threat equation, defines its components, and offers other commentary.

Thursday, September 23, 2004

"Prior to 'High-Tech Crimes Revealed' (HTCR) I read and reviewed 'Stealing the Network: How to Own a Continent' (HTOAC). While HTOAC is fictional and written almost exclusively from the point of view of the 'hacker,' HTCR is mostly true and written from the law enforcement perspective. On the strength of the cases described in the first half of the book, I recommend HTCR as an introduction to the mindset needed to pursue and prosecute cyber criminals.

Author Steve Branigan brings a unique perspective to his book. In 1986-7 Branigan was a patrolman in the Seaside Heights Police Department, but three years later he investigated telecom incidents for Bell Communications Research. Later work at Lucent and Bell Labs prepared him for co-founding Lumeta in 2000. His experience with telecom security differentiates the book from those who spend more time on Internet-centric crimes."

Monday, September 20, 2004

Do you have any Gmail invitations you don't need? Do you want a Gmail account? If the answer to either question is yes, visit isnoop.net. Their "Gmailomatic" site accepts donated invitations: click "Invite a friend to join Gmail!" from within your Gmail account and send the invite to "gmail@isnoop.net", and the invitation will be made available to anyone who requests it through isnoop.net. I donated two invites a few minutes ago. Literally within seconds of seeing the donation count increase by two, both were snatched up by requesters at isnoop.net.

I've been reading David Courtney's Soekris guide. It's incredibly detailed and explains how to install FreeBSD 4.9 and FreeBSD 5.2.1 onto the Soekris net4801. I previously described my experiences with the Soekris, but David's document addresses issues I hadn't considered. For example, he discusses the Soekris BIOS and shows how to navigate it. His setup uses PXE and he installs the OS onto a 2.5 inch laptop hard drive rather than a CF card.

On the lighter side, system administrator extraordinaire Bill Bilano just announced "Severe exploit found, all UNIX are affected!" This was my favorite line:

"Northcutt better take out that section about the Mitnik attack in that terrible book he is always rehasing with only a spit-shine and fancy new cover because here comes something leaner and meaner! (I have re-bought that nut's book eight times and it is always the same old cruft over and over but here wont be a ninth purchase, you bet your pink pajamas!) Someone needs to tell him that SANS is not the MANS! LOL!"

The SNORT_2_3 branch was marked in CVS shortly after I first posted the snort-inline story. Release manager Jeremy Hewlett made the announcement. If you follow the instructions to check out Snort from CVS, be sure to use SNORT_2_3 for your tag and run 'autojunk.sh' before trying to run 'configure'. Remember this is not a new Snort release, only the appearance of new code in CVS.

"Cisco Systems today announced a new line of integrated services routers, the industry's first routers to deliver secure, wire-speed data, voice, video and other advanced services to small and medium-sized businesses (SMBs) and enterprise branch offices, as well as service providers for managed network services offerings. Founded on 20 years of routing innovation and leadership, the new Cisco 1800 Series, Cisco 2800 Series and Cisco 3800 Series integrated services routers are the first to provide customers with an infrastructure that enables fast, secure access to today's mission-critical business applications with optimized security, while establishing a foundation for tomorrow's intelligent networks."

In two sentences we have three references to security, two to speed, and one showing Cisco's attempt to leverage its longevity as a selling point. Cisco doesn't seem to think that routers just need to get packets moved quickly from one node to the next. Now they are security devices. The security features document offers these enhancements:

Not all of these features, like NBAR, are new. What they all need, however, is lots of memory. The Quick Access Routers Quick Reference Guide (.pdf) for the older series of Cisco routers shows much lower DRAM and Flash figures. With the new 2800 series, for example, the 2811, 2821, and 2851 routers offer 64 MB of Flash and 256 MB of DRAM memory by default. My 2651XM originally had 16 MB Flash and 64 MB DRAM. Notice how the ability of a router to become a VPN concentrator, firewall, IDS, and "IPS" is seen as an improvement.

While all the additional features put more capabilities into a single box, I'm not sure I like the complexity and opportunities for exploitation. As the Cisco router becomes more complex and involved in the network, it will be more likely to be compromised. Since no one usually bothers to monitor traffic to and from routers themselves, I see a bonanza for the likes of Phenoelit who specialize in discovering flaws in "appliances" like routers and printers.

The key future development for the Cisco router franchise will be the modularization of IOS, perhaps built on QNX, already present in the latest CRS-1 Carrier Routing System.

In my last story I originally stated "With Windows, unless I deploy a host-based firewall, it is difficult if not impossible to disable unnecessary services." I based this assessment on previous experiences where it was difficult to get a "clean" netstat output (meaning no unnecessary listening services). Getting to this point, as described by books like Securing Windows NT/2000 Servers for the Internet, was difficult and in many cases left services functionally disabled but still visible in netstat output.

I found an excellent guide by Hervé Schauer Consultants called Minimizing Windows Network Services that takes a step-by-step, netstat-based approach to removing Windows services. After reading the guide, I changed my original Blog entry to say "With Windows, unless I deploy a host-based firewall, it is difficult to disable all unnecessary services."
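As a sketch of the guide's general approach -- check what is listening, then stop and disable what you don't need -- the loop on Windows looks something like the following. The Messenger service is only an illustration; the actual services to touch vary by system, and sc.exe is not present on every Windows version:

```
:: cmd.exe -- list listening TCP ports, then stop and disable a service
netstat -an | findstr LISTENING
net stop Messenger
sc config Messenger start= disabled
```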

I base this statement on advice in the HSC guide. For example, the guide begins by offering this caution about interpreting netstat output:

"The netstat command does not exactly report TCP and UDP ports states... for each outgoing TCP connection, an additional line will appear in netstat output, showing a TCP port in LISTENING state. It is important to make the difference between an opened TCP port and one incorrectly reported by netstat in the LISTENING state. Note: this bug has been fixed in Windows Server 2003."

The document then describes a variety of combinations of 'net' commands and registry tweaks needed to disable various services. Near the end we read this advice, which to me exemplifies what I was trying to convey regarding the complexity of Windows service removal:

"The only remaining opened port is TCP port 135. It is opened by the Remote Procedure Call (RpcSs) service and it is not possible to disable it because this service contains the COM service control manager, used by local processes.

TCP port 135 remains opened because it is used to receive remote activation requests of COM objects. A global setting exists to disable DCOM and can be set in the registry:

Disabling DCOM does not close TCP port 135. To close it, one solution is to remove IP-based RPC protocols sequences from the list that can be used by DCOM. In our case, the sequence ncacn_ip_tcp (transport on TCP/IP) can be removed."

Contrast this with the ease of configuring /etc/rc.conf on a BSD system! I am glad that there are ways to shut down unnecessary services on Windows systems, but I believe guides like this prove that Windows ends up being far more complicated when services need to be disabled.
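For comparison, here is a minimal sketch of the BSD side. These are standard FreeBSD rc.conf knobs, though the exact set of services to disable differs per system:

```
# /etc/rc.conf -- disable unneeded services, keep OpenSSH
sendmail_enable="NO"
inetd_enable="NO"
portmap_enable="NO"
sshd_enable="YES"
```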

Wednesday, September 15, 2004

I'm slowly working through the last few days' developments while I attended my 10th reunion at the US Air Force Academy. I recently received the following email:

"I have been reading your book on The Tao of NSM. I am an amateur but very interested in the subject. My only issue is that I am very uncomfortable with your bias against Windows and for the OpenSoftware. [sic] In our market, 95% of the desktops and 55% of the servers are Windows. We do not want to be caught in the emotional battle of OS. Any chance you can recommend a Windows zealot that is as good with the NSM subject as you are?"

This is an interesting question, as I directly address my sentiments on operating systems in chapter 3 of my book. I was also "quoted" on Slashdot recently about OpenBSD, but I can't remember making that statement. (If you know where it came from, email taosecurity at gmail dot com.)

Several factors drive my personal preference for UNIX, or more specifically, FreeBSD-based sensors. Some are personal and some are universal. In many places where I mention BSD, Linux and in some cases Mac OS X also apply.

1. Platform Security: One of the primary responsibilities of a security professional is to avoid introducing additional vulnerabilities while deploying people, processes, and products to improve security. The Hippocratic Oath, "First, do no harm," applies. I am not confident that a Windows system can defend itself on the Internet. Configuring a Windows system such that it can operate independently, outside of the protection of a firewall, is not easy. I can quickly disable all services except OpenSSH on a FreeBSD or OpenBSD platform, and not need a host-based firewall. With Windows, unless I deploy a host-based firewall, it is difficult to disable all unnecessary services. Furthermore, Windows' security record pales in comparison to FreeBSD or OpenBSD. A security professional should not have to worry about monthly security updates for his security platform.

2. Network Performance: Aside from the work of people like Fulvio Risso and the Winpcap team, I do not see the level of attention paid to Windows network performance as I do for FreeBSD. I know of a proprietary military intrusion detection system, a commercial packet capture device (Sandstorm NetIntercept), and other platforms deployed on FreeBSD specifically for the robustness of its TCP/IP stack and network performance. On the Linux side, work done by Phil Wood and Luca Deri also points to specific network performance enhancements. One of the primary reasons to deploy a sensor is to collect traffic, and no one ever cites traffic collection capabilities as a strength of Windows.

3. Ease of Deployment: Many assume Windows must be the easier OS to deploy since it uses a GUI. Nothing could be further from the truth. GUIs are helpful because they tend to put options in front of the user in menu format. CLIs tend to be difficult because the user must know what series of commands and options must be passed to accomplish a given task. Once the CLI is understood, however, it is easier to accurately replicate and track the actions taken on a CLI system. How does one run script to record actions taken on a Windows GUI?
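On UNIX, recording a CLI session takes one command. A quick sketch using the util-linux syntax for script(1); the BSD version instead takes the command after the file name:

```shell
# Record a shell session to a typescript file; -c runs a single command
# non-interactively instead of spawning an interactive shell
script -q -c 'echo recorded-ok' /tmp/session.log
# The typescript now contains every byte the command wrote
grep recorded-ok /tmp/session.log
```

There is no comparable built-in way to capture a transcript of clicks made in a GUI.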

Beyond the GUI vs. CLI issue, I believe the UNIX model, and especially the BSDs' OS installation process, to be much better suited for building sensors. For example, it is trivial to deploy a very stripped-down FreeBSD or OpenBSD sensor using built-in installation options. Fanatics are free to go the extra mile to remove tools in preselected packages, but that is not always necessary. Even the most minimal Windows deployment still installs a graphical subsystem as part of the Windows kernel.

4. System Administration: I think it is easier to administer BSD or UNIX systems in general. I can do everything I need over OpenSSH, which is installed with the OS (unlike OpenSSH added to a Windows box). I can use OpenSSH over a low bandwidth link if necessary, unlike Terminal Services (VNC is another matter, but is again an add-on). I can check critical configuration files into RCS and track or roll back changes. I can copy these config files easily among machines. There is a defined and well-understood separation between user roles and root users. Windows has no equivalent of the ports tree, which gives easy access to almost 12,000 applications.

5. Diverse Tools: Most of the tools in my book are UNIX-based because the majority of network security monitoring tools were developed by UNIX programmers. Besides the other four reasons given, this one is a major reason why I know of no "Windows zealot that is as good with the NSM subject" as me. Commercial tools exist, but with ever tighter security budgets I don't see many enterprises having the money to buy them. Open source is more than free -- it's also the power to change tools that don't do what you want. Although I don't see the value in Web-based alert browsers like ACID, I appreciate that a project like Basic Analysis and Security Engine (BASE) could fork the ACID code base to continue development of that tool. Such innovation is just not possible with proprietary tools.

These are the reasons I am an open source advocate and user. I know of several very smart people working for Microsoft and this critique is not intended to attack them. However, I am more confident that my BSD-based security appliances will do the job they were built for, and not become a liability when I deploy them.

"'Stealing the Network: How to Own a Continent' (STN:HTOAC) is a detailed look at the capabilities a structured threat could apply to the world's vulnerable digital infrastructures. Rather than hire a Beltway Bandit, I recommend those planning the digital defense of this nation read HTOAC. This book is more creative, comprehensive, and plausible than what most 'infowar' think-tanks could produce."

Thursday, September 09, 2004

I noticed a post to the snort-inline mailing list last week that announced a "changing maintainer and future plans." Snort-inline is a project which allows a Snort sensor positioned inline (as opposed to sniffing passively) to accept packets from IPTables and then make pass/drop decisions. William Metcalf is taking over as lead developer from Rob McMillen, although Rob will remain with the project along with newcomer Victor Julien.

William claims "we have been very busy working on snort_inline and evaluating the snort_inline code that is being integrated into the snort-2.3 source branch. That's right, you heard it here first: snort-2.3 will have snort_inline functionality built into it. Rob, Victor and I will be maintaining and supporting it. We will still maintain snort_inline as a separate project and use it as vehicle for bleeding-edge functionality and honey net-specific features."

This is interesting because of comments Marty made at CanSecWest in April. He said Sourcefire was working on an inline capability that would not use the existing Snort-inline code. I have not seen anything new appear at cvs.snort.org but I will keep looking.

I am interested in knowing if the inline features of Snort 2.3 will also be largely tied to Linux via IPTables. I asked about FreeBSD support in April, and there have been more recent discussions of support for OpenBSD. FreeBSD support is claimed, but I believe it doesn't work on the 5.x tree. I also discussed the issue here last April, but I've never heard of anyone actually getting Snort-inline to work with any BSD system. For those who want to use Snort to inspect and drop traffic, support for BSD would allow running Snort on a very trustworthy platform like OpenBSD, or taking advantage of new traffic-handling developments in FreeBSD.

Tuesday, September 07, 2004

"I'm a huge fan of your newest book, and I read it cover-to-cover in a handful of evenings. However, I have a question about the approach you take for doing network monitoring.

The average throughput of our Internet connection is around 5Mbits/sec sustained. I would love to implement Sguil as an interface to my IDS infrastructure (currently Acid and Snort on the network side), but I ran some numbers on the disk space required to store that much network traffic, and the number quickly swamped the disk resources I currently have available to me for this activity.

Am I missing something with regards to how Snort stores data in this kind of scenario, or do I really need to plan for that much disk space?"

This is a good question, and it is a common initial response to learning about Network Security Monitoring (NSM).

Remember that NSM is defined as "the collection, analysis, and escalation of indications and warnings to detect and respond to intrusions." There is no explicit mention of collecting every packet that traverses the network in that definition. However, NSM analysts find that the best way to accomplish the detection of and response to intrusions is by interpreting alert, session, full content, and statistical network evidence. Simply moving "beyond intrusion detection" (my book's subtitle) -- beyond reliance on alert data alone -- moves one away from traditional "IDS" and towards NSM.

The answer for those operating in high-bandwidth environments is to collect what you can. Chapter 2 (.pdf) lists several principles of detection and security, including:

- Detection through sampling is better than no detection.
- Detection through traffic analysis is better than no detection.
- Collecting everything is ideal but problematic.

I recommend looking at Chapter 2 for more information on these principles.

Someone monitoring a data center uplink or an Internet backbone is not going to collect meaningful amounts of full content data without spending a lot of money on high-end hardware and potentially custom software. You may only be able to collect small amounts of full content data in response to specific problems like identification of a covert back door. You may have to take a cue from Internet2 and analyze NetFlow data, and hardly look at full content at all.

There is nothing wrong with either approach. The idea is to give your analysts as much supporting information as possible when they need to make a decision concerning suspicious or malicious traffic. Giving them only an alert, with no other context or content-neutral data, makes it very unlikely they will know how to make an informed validation and escalation decision.

My specific answer for the question at hand would be to try deploying Sguil with a conservative full content collection strategy. Pass BPF filters in the log_packets.sh script to limit the full content data collected on the sensor. Additionally, if you find the amount of session data logged by SANCP to be a burden, you can pass filters to SANCP as well.
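To make that filtering concrete, here is a hedged sketch of the kind of BPF expression one might supply. The variable name FILTER and the excluded ports (873 for rsync, 2049 for NFS) are my illustrations, not values from the original post; check your copy of log_packets.sh for the exact hook it exposes.

```shell
# Illustrative only: exclude known bulk-transfer traffic from full
# content logging. Ports 873 (rsync) and 2049 (NFS) are examples,
# and the FILTER variable name is an assumption about log_packets.sh.
FILTER='not (port 873 or port 2049)'
echo "full content BPF filter: $FILTER"
```

The same expression syntax works anywhere BPF filters are accepted, including tcpdump and SANCP.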

If at all possible, I advise against filtering SANCP and other session collection mechanisms, as these content-neutral collection measures can really save you in an incident response scenario. If SANCP and database inserts are a problem, consider the more mature code base of Argus, or collect NetFlow records from routers you control. My book also outlines how to do this.

Update: My buddy Bamm Visscher points out that a sustained 5 Mbps throughput is 2250 MB per hour or 54000 MB per day in raw traffic. However, some overhead is needed for libpcap headers. For small packets, the header could be as large as the content, effectively doubling the disk space needed to record that packet. For large packets, the header is a smaller percentage of the overall record of the packet.
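Bamm's arithmetic can be checked directly; the figures below use decimal megabits and deliberately ignore the libpcap header overhead he mentions:

```shell
# 5 Mbps sustained throughput, decimal units, no libpcap overhead.
mbps=5
bytes_per_sec=$((mbps * 1000000 / 8))             # 625000 bytes/s
mb_per_hour=$((bytes_per_sec * 3600 / 1000000))   # MB written per hour
mb_per_day=$((mb_per_hour * 24))                  # MB written per day
echo "${mb_per_hour} MB/hour, ${mb_per_day} MB/day"
```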

Anecdotal evidence from one of Bamm's friends says a link with sustained 10 Mbps writes about 8 GB per hour to disk.

Returning to the original 5 Mbps question, a conservatively specced sensor like a Dell PowerEdge 750 with two 250 GB SATA drives can hold at least several days' worth of libpcap data, and potentially up to a week. That is plenty of time to retrieve useful full content data if regular monitoring is done.
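As a rough retention estimate (my arithmetic, assuming roughly 500 GB usable across the two drives and about 54 GB of raw traffic per day at 5 Mbps; real libpcap and filesystem overhead will reduce this):

```shell
# Rough retention: ~500 GB usable disk vs ~54 GB of raw traffic per day.
disk_gb=500
gb_per_day=54
days=$((disk_gb / gb_per_day))
echo "approximately ${days} days of full content retention"
```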

Over the weekend I learned about BlogShares.com, a fantasy stock market for blogs. It was originally created by Seyed Razavi, who turned over management of the project late last year. TaoSecurity.Blogspot.com is listed on the BlogShares market, so I registered myself as the owner. Barry Irwin, owner of lair.moira.org, holds 4000 shares of this blog, while I as the blog owner was given 1000. I originally found Barry's site when researching the nVidia driver issue mentioned earlier.

Last month Nvidia released FreeBSD drivers for their products. The README describes how to install and configure the drivers. Their forums offer advice for those having problems. Slashdot reported on this as well.

If anyone can recommend a dual-DVI card that works with FreeBSD, please email me at richard at taosecurity dot com.

Sunday, September 05, 2004

In the spirit of reporting on technology, I feel compelled to report on the latest gadget to enter my home -- the DC07. What is it, you might ask? A miniature rocket? A new USB device? This, my friends, is the most amazing vacuum cleaner I have ever used. I call it the Macintosh of Vacuums due to its elegant engineering, thoughtful design, and superior performance.

The product is made by Dyson, a British company founded by inventor James Dyson. His story, also described by Forbes, is compelling. His recent TV ads show him describing how he thought other vacuums didn't do a good job. 5,127 prototypes later, he invented the Dyson. He shopped his bag-less design to the major vacuum manufacturers, who passed on his technology. Dyson claims the manufacturers make $500 million per year selling bags, so they were not interested in ending that income stream by selling a bagless vacuum.

Once the manufacturers realized how well Dyson's system worked, they introduced their own inferior products and even tried to copy his patent-protected technology outright. Dyson won a patent infringement claim against Hoover. (This is the sort of use for which patents are appropriate, unlike software patents.) According to a Dyson press release describing Hoover's patent infringement guilt, "Hoover later admitted that they 'regret that Hoover as a company did not take the product off the shelf, take it off Dyson; it would have lain on the shelf and not have been used,' (Hoover’s Vice-President, Europe 1995)."

I had no idea how vacuums work until I learned about the Dyson. His insights make me wonder why anyone bothers buying products using inferior technology. Dyson knew that vacuum bags are porous to allow air to exit the bag as it draws up dirt from the ground. Like most people, he thought the vacuum lost suction once the bag filled with dirt. He observed, however, that the pores needed to maintain suction quickly become blocked, even with a barely filled bag. Blocked pores reduce suction, not a filled bag. Within minutes of using a normal bag vacuum, you've effectively lost the suction needed to remove dirt.

Dyson's product does use two filters, but the first need only be washed every six months, and the second has a lifetime warranty. Dyson's site claims "Dyson Root Cyclone technology uses 100,000G of centrifugal force in the cyclones to filter dust and remove dirt from the airflow efficiently. Because there is nothing to obstruct the airflow, it doesn't clog and doesn't lose suction." I'd like to see the calculation for the "g" rating, but the no-clogging feature appears genuine. The proof of its superiority came when I ran it through a room I had just cleaned with my old vacuum. The Dyson filled its canister with dirt and dog hair missed by the old unit. You can see the hair culprit pictured above.

If you'd like to read more about Dyson, check out the Amazon.com reviews.

Friday, September 03, 2004

Last week I posted a method to extract individual pcap files from a larger pcap file. Originally I thought it would be useful to have a tool that could extract every individual flow from a pcap file into pcap format. Note this is different from the capability offered by the excellent Tcpflow, which extracts the application data from all TCP flows.
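For a single known flow, the carving can be done with tcpdump alone; the file names, addresses, and port below are illustrative, not from the original post:

```shell
# Carve one TCP flow from a larger trace by naming both endpoints
# in a BPF expression. big.pcap, the hosts, and port 4521 are
# hypothetical examples.
tcpdump -r big.pcap -w flow.pcap \
  'tcp and host 10.1.1.5 and host 10.1.1.9 and port 4521'
```

A flow demultiplexing tool goes further by splitting every flow in the trace into its own file automatically, with no need to know the endpoints in advance.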

I thought the tool Netdude might have this capability when I saw its libnetdude plugin Flow Demultiplexer. I was familiar with plugins for Netdude, the graphical interface. Flow Demultiplexer, however, is not available within Netdude and must be invoked using libnetdude.

First, install Netdude. I used the FreeBSD net/netdude port. Next, download and install the following from source code, in the order specified:

I think this Netdude Demux plugin is very useful, and I thank Christian for his help in learning how to use it. If you'd like to see some of Netdude's other capabilities, I feature Netdude in chapter 6 of The Tao of Network Security Monitoring.

Wednesday, September 01, 2004

"'IRC Hacks' is not a more recent version of Alex Charalabidis's 'The Book of IRC.' Published by No Starch Press in 2000, 'The Book of IRC' focuses on more introductory material, and thoroughly covers the issues facing most IRC users. Unlike the older No Starch book, 'IRC Hacks' devotes over 200 pages to bot development. In other words, the 'IRC Hacks' authors concentrate on more advanced ways to interact with IRC servers. If this is your primary interest, you will enjoy 'IRC Hacks.'"