"Farmer and Venema do for digital archaeology what Indiana Jones did for historical archaeology. 'Forensic Discovery' unearths hidden treasures in enlightening and entertaining ways, showing how a time-centric approach to computer forensics reveals even the cleverest intruder. I highly recommend reading this book."

In Chapter 7 (available online as a PDF) Farmer and Venema mention the Veeco Nanotheater. Veeco makes products that can scan the surface of disks at nanotechnology scales. They show the image at right, and describe it as "residuals of overwritten information on the sides of magnetic disk tracks." This demonstrates the difficulty of truly "destroying" digital evidence. Forensic Discovery explains the problem this way:

"Although memory chips and magnetic disks are designed to store digital information, the underlying technology is analog. With analog storage of digital information, the value of a bit is a complex combination of past stored values. Memory chips have undocumented diagnostic modes that allow access to values smaller than a bit. With modified electronic circuitry, signals from disk read heads can reveal older data as modulations on the analog signal."

At 198 pages this book is a quick read, which explains how I was able to read and review it while writing a new book!

Sunday, January 30, 2005

Visitors to TaoSecurity.com may notice that the icon appearing in the Web browser address bar has changed from the FreeBSD daemon to the yin-yang "S" pictured at left. I created this icon using the following process, detailed by DHCPDump author Edwin Groothuis.

First I used xv to crop the TaoSecurity logo, set the image size to 16x16, and saved the image (taosecurity.png) in PNG format.

"To be honest, this was one of the best books that I've read on network security. Other books often dive too deeply into technical discussions and fail to provide any relevance to network engineers/administrators working in a corporate environment. Budgets, deadlines, and flexibility are issues that we must all address. The Tao of Network Security Monitoring is presented in such a way that all of these are still relevant. One of the greatest virtues of this book is that it offers real-life technical examples, while backing them up with relevant case studies. Network security engineers, system administrators, and security management will find value in this book. It is a must-read for anyone interested in getting into the field, but would still be useful as a reference for the experienced expert."

Friday, January 28, 2005

Last month I found Meling Mudin's IDS blog, and learned of Jose Nazario's tool Flowgrep. Flowgrep is written in Python. It is similar to Ngrep, which I addressed in my first book. Ngrep is packet-oriented, meaning the strings for which Ngrep searches must all appear in a single packet. If you search for 'bejtlich', and 'bejt' is in one packet and 'lich' another, then Ngrep won't find anything.

Flowgrep, in contrast, is conversation-oriented. Flowgrep assembles TCP sessions, as well as pseudo-sessions for UDP and ICMP. Flowgrep will rebuild a conversation where 'bejt' is in one packet and 'lich' another, and report seeing 'bejtlich'.
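The difference between the two approaches is easy to see in a few lines of Python. This is a toy illustration of the concept, not Flowgrep's or Ngrep's actual code; the payloads are invented for the example:

```python
# Two TCP segments: the string 'bejtlich' is split across the boundary.
packets = [b"login: bejt", b"lich\r\npassword:"]

# Packet-oriented search (Ngrep-style): each packet is inspected alone,
# so a string straddling two packets is never seen whole.
per_packet_hit = any(b"bejtlich" in p for p in packets)

# Conversation-oriented search (Flowgrep-style): segments are reassembled
# into one stream before matching.
stream = b"".join(packets)
stream_hit = b"bejtlich" in stream

print(per_packet_hit)  # False -- no single packet contains 'bejtlich'
print(stream_hit)      # True  -- reassembly restores the full string
```

The same payload that defeats per-packet matching is trivially found once the segments are joined, which is exactly the value Flowgrep adds.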

Flowgrep relies on Mike Schiffman's Libnet and Mike Pomraning's Pynids, a Python wrapper for Rafal Wojtczuk's Libnids. Mike was kind enough to work with me over the last week to get Pynids operational on FreeBSD 5.3.

Here's how I ended up with a working Flowgrep implementation. First I ensured Python was on my system:

Now that Flowgrep is installed, let's see how it works. For demonstration purposes, I have telnet running on host janney on an odd port -- 47557 TCP. I tell Tcpdump to collect all traffic on janney involving that port. This gives us traffic for later analysis:

"So what," you might say. "I can reconstruct streams with Tcpflow." True enough, but how did we end up with these two streams? These are the result of Flowgrep searching streams for the content 'hackerpassword'. What did the packets that built that stream look like? Let's jump to the packets where the server presented "Password:" to the client:

By now everyone should appreciate just how powerful and useful Flowgrep can be. Even though every character of my 'hackerpassword' string appeared in separate packets, Flowgrep assembled the stream and logged the traffic that matched the filter I specified.

Flowgrep is not the only tool with this capability, since more robust intrusion detection systems offer similar features. However, this is the only stand-alone tool I know that offers rapid string matching on stream contents on arbitrary ports.

I chose to demonstrate telnet because I knew it would place virtually every character I typed into separate packets. The principle applies anywhere you are concerned that content of interest could be split between multiple packets. Furthermore, Flowgrep is designed to watch UDP and ICMP conversations as well.

Wednesday, January 26, 2005

In late 2003 I published Dynamic Duo Discuss Digital Risk. This was my light-hearted attempt to reinforce the distinction between a threat and a vulnerability. Specifically, a threat is a party with the capabilities and intentions to exploit a vulnerability in an asset. A vulnerability is a weakness in an asset that could lead to exploitation. An intruder (the threat) exploits a hole (the vulnerability) in Microsoft IIS to gain remote control of a Web server. In other words, threats exploit vulnerabilities.

This is a simple concept, yet it is frequently confused by security prophets like Bruce Schneier in Beyond Fear. Now SANS is making the same mistake in the latest Incident Handler's Diary. In a posting to announce work on the upcoming SANS Top 20 List, the Diary calls the new report the "SANS CRITICAL INTERNET THREATS 2005" and says:

"SANS Critical Internet Threats research is undertaken annually and provides the basis for the SANS 'Top 20' report. The 'Top 20' report describes the most serious internet security threats in detail, and provides the steps to identify and mitigate these threats."

So, are we going to read a ranking of identified Romanian intruders, followed by Russian organized crime, Filipino virus writers, and then Zimbabwean foreign intelligence services? Will mitigation include prosecution, incarceration, and the like? Probably not, as the announcement continues:

"The current 'Top 20' is broken into two complementary yet distinct sections:
- The 10 most critical vulnerabilities for Windows systems.
- The 10 most critical vulnerabilities for UNIX and Linux systems."

So now we're talking about vulnerabilities. That's what last year's "Twenty Most Critical Internet Security Vulnerabilities" addressed. The announcement concludes:

"The 2005 Top 20 will once again create the experts' consensus on threats - the result of a process that brings together security experts, leaders, researchers and visionaries... In addition to the Windows and UNIX vulnerabilities, this year's research will also focus on the 10 most severe vulnerabilities in the Cisco platforms."

I sincerely hope at least one expert will clue in the announcement-writer concerning the difference between a threat and a vulnerability. Words matter!

"Vulnerability is not synonymous with threat. A vulnerability is a weakness in a system that may be exploited. A threat requires an actor with the motivation, resources, and intent to exploit a vulnerability."

I read in the latest SANS NewsBites that UC San Diego suffered another intrusion in November 2004, jeopardizing the personal information of about 3,500 people who had taken courses at UCSD Extension. This incident follows a well-publicized intrusion in April 2004 putting at risk personal data on 380,000 people. In both cases UC appears to have caught unstructured threats, as each intruder used the systems as warez depositories for pirated movies and music.

"Officials said it took two months to notify those who were affected because officials first needed to determine the extent of the breach."

This is exactly why I promote network security monitoring as a means to rapidly scope the extent of intrusions. First, generating indicators and warnings in the form of alert data (usually from IDSs) and statistical data gives security professionals a good chance of identifying an intrusion as it happens or shortly thereafter. I would bet the University saw an increase in traffic when its systems began hosting warez. Second, collecting session and full content data would give the University a chance to inspect data not tied to IDS alerts. Third, all of this information could potentially describe the intruder's activities, and validate if he or she stole sensitive personal information.

Tuesday, January 25, 2005

Snort 2.3.0 has been released. There appear to be only bug fixes and documentation updates since RC2 arrived last month. At the moment the online manual still shows 2.2.0, but the .pdf packaged with the tarball is the 2.3.0 version. I have not seen any problems with RC2, so I believe the upgrade process should be smooth.

I will get to work on an updated Sguil installation guide shortly, as I've accumulated enough minor fixes to warrant an update.

"Frost & Sullivan presents this Award to the company that demonstrated excellence in all operations. Sourcefire is recognized for its numerous achievements including unique product strategy, important technological developments, and significant gain in market share."

Those of you running the FreeBSD 4.x tree will be happy to know that FreeBSD 4.11 is now available. The release announcement calls 4.11 "the latest release of the FreeBSD Legacy development branch" and states the following:

"FreeBSD 4.11 will become the first 'Errata Branch.' In addition to Security fixes other well-tested fixes to basic functionality will be committed to the RELENG_4_11 branch after the release... This is expected to be the last release from the RELENG_4 branch."

I am no longer running any 4.x systems and have migrated everything to 5.3.

Last year when US Senator Ted Kennedy was detained for being on a no-fly list, I discussed his plight in relation to intrusion detection system "false positives." If an IDS is operating correctly, every alert it generates is the result of an action it was programmed to take. In other words, when a functioning IDS sees "cmd.exe", it reports seeing "cmd.exe".

It doesn't matter if the appearance of "cmd.exe" on the wire is not part of an actual intrusion; a rule to alert on "cmd.exe" does not cause "false positives" if the IDS reports seeing "cmd.exe". A real false positive involves the IDS reporting "cmd.exe" when no such content passed on the wire. Therefore, there are no such things as false positives. Blame the signature writer or IDS developer, not the IDS.

"Illinois State Police Trooper Daniel Gillette stopped Roy Caballes for driving 71 miles per hour in a zone with a posted speed limit of 65 miles per hour. Trooper Craig Graham of the Drug Interdiction Team heard on the radio that Trooper Gillette was making a traffic stop. Although Gillette requested no aid, Graham decided to come to the scene to conduct a dog sniff.

Gillette informed Caballes that he was speeding and asked for the usual documents–driver’s license, car registration, and proof of insurance. Caballes promptly provided the requested documents but refused to consent to a search of his vehicle. After calling his dispatcher to check on the validity of Caballes’ license and for outstanding warrants, Gillette returned to his vehicle to write Caballes a warning ticket. Interrupted by a radio call on an unrelated matter, Gillette was still writing the ticket when Trooper Graham arrived with his drug-detection dog.

Graham walked the dog around the car, the dog alerted at Caballes’ trunk, and, after opening the trunk, the troopers found marijuana."

Justice Stevens' majority opinion held that "the dog sniff was performed on the exterior of respondent's car while he was lawfully seized for a traffic violation. Any intrusion on respondent's privacy expectations does not rise to the level of a constitutionally cognizable infringement... A dog sniff conducted during a concededly lawful traffic stop that reveals no information other than the location of a substance that no individual has any right to possess does not violate the Fourth Amendment." In other words, it's ok for police to use dogs to inspect cars for drugs during traffic violation stops (or at other times), even if there is no suspicion of drugs involved.

"I would hold that using the dog for the purposes of determining the presence of marijuana in the car’s trunk was a search unauthorized as an incident of the speeding stop and unjustified on any other ground...

The infallible dog, however, is a creature of legal fiction... [T]heir supposed infallibility is belied by judicial opinions describing well-trained animals sniffing and alerting with less than perfect accuracy, whether owing to errors by their handlers, the limitations of the dogs themselves, or even the pervasive contamination of currency by cocaine...

In practical terms, the evidence is clear that the dog that alerts hundreds of times will be wrong dozens of times.

Once the dog’s fallibility is recognized, however... the sniff alert does not necessarily signal hidden contraband, and opening the container or enclosed space whose emanations the dog has sensed will not necessarily reveal contraband or any other evidence of crime."

Justice Ginsburg expresses the second reason for my disagreement. Returning to her dissent, we see that beyond a Fourth Amendment violation, there are other problems with allowing canine searches prone to false positives:

"A drug-detection dog is an intimidating animal... Injecting such an animal into a routine traffic stop changes the character of the encounter between the police and the motorist. The stop becomes broader, more adversarial, and (in at least some cases) longer. Caballes -- who, as far as Troopers Gillette and Graham knew, was guilty solely of driving six miles per hour over the speed limit -- was exposed to the embarrassment and intimidation of being investigated, on a public thoroughfare, for drugs...

Under today’s decision, every traffic stop could become an occasion to call in the dogs, to the distress and embarrassment of the law-abiding population...

Today’s decision... clears the way for suspicionless, dog-accompanied drug sweeps of parked cars along sidewalks and in parking lots... Nor would motorists have constitutional grounds for complaint should police with dogs, stationed at long traffic lights, circle cars waiting for the red signal to turn green."

"We have held that any interest in possessing contraband cannot be deemed 'legitimate,' and thus, governmental conduct that only reveals the possession of contraband 'compromises no legitimate privacy interest.'"

Now, what if the definition of contraband is extended beyond illegal drugs? How about music or movies in digital form, or pirated software? Is the Court opening the door to knock down privacy rights, since means to discover contraband do not infringe Fourth Amendment rights? The Court continues:

"The legitimate expectation that information about perfectly lawful activity will remain private is categorically distinguishable from respondent’s hopes or expectations concerning the nondetection of contraband in the trunk of his car."

The Court also brushes aside the false positive concerns:

"Although respondent argues that the error rates, particularly the existence of false positives, call into question the premise that drug-detection dogs alert only to contraband, the record contains no evidence or findings that support his argument."

I find this ruling very disturbing. I expect to see canine units used in increasing numbers in the coming months, where false positives will continue to plague innocent people. For example, yesterday National Public Radio reported that a man carrying cash to close on his house purchase was arrested when a dog alerted to supposed traces of illegal drugs on the money. Apparently traces of drugs on US currency are not an urban legend!

Since I maintain multiple Dell PowerEdge 750 servers with Hyper-Threading Technology (HTT), I found Scott's comments on gains from HTT to be interesting. It seems that HTT will only be useful once the new ULE scheduler is equipped to make use of HTT and ULE replaces the 4BSD scheduler. Scott says:

"The other design goal of ULE was to have it map out and understand the CPU topology and make good scheduling choices for features like Hyper-Threading. Unfortunately, to my knowledge this work is not yet complete.

Scott Long: As of right now, very little. The scheduler really needs to be aware of Hyper-Threading and schedule threads and processes appropriately so that the caches and TLBs [Transaction Look-aside Buffers] can be shared and not get thrashed. The ULE scheduler will fill this role in the future, but it's not there yet."

Scott also discusses how the network stack will benefit from the removal of the GIANT lock, and how the Pf firewall imported from OpenBSD already runs without GIANT, unlike IPFW. As a result, Pf is believed to be faster on FreeBSD than IPFW.

Friday, January 21, 2005

"Prediction: This is the year you will see application level attacks mature and proliferate. As hackers focus more on applications, Oracle may start competing with Microsoft as the vendor delivering software with the most critical vulnerabilities."

I hear this focus on "applications" constantly, but this is old news. First look at the problem by separating the operating system (OS) kernel from the OS applications. If we look at vulnerabilities in this respect, "applications" have been under attack for decades. Perusing the CERT Advisories list (transitioned to the US-CERT's Technical Cyber Security Alerts in 2004), we see warnings about application vulnerabilities since 1988. For example, in December 1988 we have CA-1988-01: ftpd Vulnerability.

Wednesday, January 19, 2005

I just read an article titled Microsoft Turns to External Patch Testers. The goal "is to provide a small number of dedicated external evaluation teams with access to the [beta] patches to test for application compatibility, stability and reliability in simulated production environments." This article cites a Microsoft rep saying "'This is a very controlled program... We have only invited participants with whom we have a close relationship, where we are sure that confidentiality will be maintained.'"

This comment makes me question if Microsoft understands what it is doing: Stephen Toulouse, program manager at the Microsoft Security Response Center, "made it clear that the outside testers had no access to information on the vulnerability addressed by the patch. 'They're evaluating the updates in a private, closed-lab environment. They are required to sign an NDA [nondisclosure agreement] and they don't ever know what the patch is correcting. They're simply simulating a real-world deployment in a lab environment and looking for potential problems,' Toulouse said."

At the very least, patch recipients will be able to see what files were changed on the target system if they use file integrity verification software. The testers may not know exactly what problem is being corrected, but any competent tester will know that XYZ.dll and ABC.dll have been replaced by Microsoft's beta versions.
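The point about file integrity verification is easy to demonstrate. This is my own minimal sketch of the idea, not any particular product's implementation, and the file names are illustrative: snapshot the hashes of files before applying a beta patch, snapshot again afterward, and any file whose digest changed was replaced by the patch.

```python
# Minimal file integrity check: hash files before and after a patch,
# then report which files changed. Not any vendor's actual tool.
import hashlib
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 digest of its contents."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_files(before, after):
    """Return paths whose digests differ between the two snapshots."""
    return sorted(p for p in before if before[p] != after.get(p))
```

Run `snapshot()` over the system files of interest before installing the beta patch and again after; `changed_files()` then names exactly the components the patch replaced, even when the vendor says nothing about what was fixed.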

Any program involving greater testing of patches is probably a good idea. However, Microsoft should have realistic expectations concerning the sharing of information on replacement of .dlls and other Windows components.

Tuesday, January 18, 2005

Last night I started working on my next book: Extrusion Detection: Security Monitoring for Internal Intrusions. The goal of this book is to help security architects and engineers control and instrument their networks, and help analysts investigate security events.

Extrusion Detection is a sequel to my first book, The Tao of Network Security Monitoring: Beyond Intrusion Detection.

Extrusion Detection explains how to engineer an organization's internal network to control and detect intruders launching client-side attacks. Client-side attacks are more insidious than server-side attacks, because the intruder targets a vulnerable application anywhere inside a potentially hardened internal network. A powerful means to detect the compromise of internal systems is to watch for outbound connections from the victim to systems on the Internet operated by the intruder. Here we see the significance of the word "extrusion" in the book's title. In addition to watching connections inbound from the Internet, we watch for suspicious activity exiting the protected network.

Readers will learn theory, techniques, and tools to implement network security monitoring (NSM) for internal intrusions. I have already received several case studies from LURHQ and I have contacted an expert on p2p networks who plans to write a chapter. I am interested in hearing from any blog readers who might want to contribute a case study, section, chapter, or appendix on one or more of the following subjects:

- Interpreting Microsoft Server Message Block (SMB) (port 139, 445 TCP) protocols
- Microsoft's Network Access Protection (NAP)
- Cisco's Network Admission Control (NAC) technologies
- 802.1x
- VLANs and VLAN access control lists
- Cisco Network Access Module and similar means to collect traffic on network hardware
- Using FPGAs, network processors, or other non-libpcap methods to capture network traffic in high bandwidth environments
- Using proxies to inspect and carry traffic from internal systems to the Internet -- the more exotic, the better
- Any case studies involving compromise of internal systems, such as via VPN to partner networks, attaching rogue laptops, opening malicious email or visiting evil Web sites
- Anything else you think would be cool to discuss in a book on controlling, detecting, and responding to internal threats -- as long as it doesn't appear in other books!

If you have an idea you'd like to discuss, please email taosecurity at gmail dot com. You will receive full credit for anything you submit that makes it in some form into the final book, even if I have to rewrite some or all of it to meet publishing guidelines. Thank you!

I'd like to thank higB of secureme.blogspot.com for reminding me to register for ShmooCon 2005. This is "an all-new, annual East coast hacker convention hell-bent on offering an interesting and new atmosphere for demonstrating technology exploitation, inventive software & hardware solutions, as well as open discussion of critical information security issues." The program looks great, and you can't beat the $199 price tag (pay before 1 Feb) for a 3 day con (Fri 4 Feb - Sun 6 Feb). If you plan to join me for ShmooCon in DC, reply to this post.

roesch: when stream4 is doing it's thing it queues the tcp segments as they come in

roesch: in stream4 we actually queue the entire packet and keep a pointer to the payload to manage reassembly

roesch: "flushing" is what happens when we accumulate a certain number of bytes on a stream that's in excess of the "flush point" for that stream

roesch: when we flush, we reassemble the segments into a pseudopacket and run it back thru the preprocessor stack and detection engine

roesch: if there's a detect, we ask stream4 to log all the queued *packets* on the stream

roesch: the first packet gets identified as the attack packet and the rest of them are tagged off of that event

roesch: so if you're detecting on "foobar" and it's been spread across three packets as "fo" "ob" "ar" then you're going to get one event packet and two tagged packets

roesch: this was in 2.1.x or maybe 2.2

roesch: the idea is that we don't want to log the pseudopacket since it's pretty much "inadmissable" from a evidence standpoint

qru: roesch: Yeah, I always hated that thing. What do you do w/the pseudo packet then?

roesch: we chuck it

roesch: as an analyst you'll need to have something that can reassemble the segments and present them to you

roesch: which in theory is pretty easy but in implementation is a pain if you've got an evasive attacker
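Marty's queue-and-flush description can be sketched in a few lines of Python. This is my toy model of the logic he outlines, not Snort's actual source; the flush point value and the `detect` rule are invented for the example:

```python
# Toy model of stream4-style flushing: queue segments until the byte
# count passes the flush point, then reassemble a pseudopacket and run
# detection on it. The pseudopacket itself is never logged; the real
# packets are, with the first marked as the event and the rest tagged.
FLUSH_POINT = 6  # bytes (illustrative; Snort picks per-stream values)

def detect(payload):
    return b"foobar" in payload

queue, queued_bytes, alerts = [], 0, []

for segment in [b"fo", b"ob", b"ar"]:
    queue.append(segment)
    queued_bytes += len(segment)
    if queued_bytes >= FLUSH_POINT:
        pseudopacket = b"".join(queue)  # reassembled, then discarded
        if detect(pseudopacket):
            alerts.append({"event": queue[0], "tagged": queue[1:]})
        queue, queued_bytes = [], 0

print(alerts)  # [{'event': b'fo', 'tagged': [b'ob', b'ar']}]
```

Note that detection runs on the reassembled pseudopacket, but what gets logged are the original segments -- exactly the "chuck the pseudopacket, log the real packets" behavior Marty describes.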

This explanation is important for several reasons. First, it's important to understand how your IDS works. If you don't understand how it works, you're less likely to trust the alert data it generates. If you don't trust IDS alerts, why are you collecting them?

Second, this stream implementation represents a trade-off between capability and performance. Sensors are not built with unlimited ability to capture and reassemble traffic. Anything you can do to make the traffic stream cleaner for your sensor, like packet scrubbing, helps.

Third, Marty demonstrates that the pseudopacket that Snort presents to an analyst may not be an actual packet that crossed the wire. If an analyst wants to see exactly what passed by the sensor, she must turn to full content data collected independently of the alert data generation with Snort.

Monday, January 17, 2005

"FreeBSD 5.4 release engineering will start in March, and FreeBSD 5.5 release engineering will likely start in June. These releases are expected to be more conservative than previous 5.x releases and will follow the same philosophy as previous -STABLE branches of fixing bugs and adding incremental improvements while maintaining API stability.

For the 6-CURRENT development branch as well as all future development and stable branches, we are planning to move to a schedule with fixed timelines that move away from the uncertainty and wild schedule fluctuations of the previous 5.x releases. This means that major branches will happen at 18 month intervals, and releases from those branches will happen at 4 month intervals. There will also be a dedicated period of testing and bug fixing at the beginning of each branch before the first release is cut from that branch. With the shorter and more defined release schedules, we hope to lessen the problem of needed features not reaching users in a reasonable time, as happened too often with 5.x. This is a significant change in our strategy, and we look forward to realizing the benefits of it. This will kick off with the RELENG_6 branch happening in June of 2005, followed by the 6.0 release in August of 2005.

Also on the roadmap is a plan to combine the live-iso disk2 and the install distributions of disk1 into a single disk which can be used for both installation and for recovery. 3rd party packages that currently reside on disc1 will be moved to a disk2 that will be dedicated to these packages. This move will allow us to deal with the ever growing size of packages and also provide more flexibility to vendors that wish to add their own packages to the releases. It also opens the door to more advanced installers being put in place of sysinstall."

"FreeBSD 5.3 is the first release to include PF. It went out okay, but some bugs were discovered too late to make it on the CD. It is recommended to update `src/sys/contrib/pf' to RELENG_5. The specific issues addressed are:

"OpenOffice.org 2.0 is planned to be released in March 2005. Currently developer snapshot versions are available. Now one of the developer versions has been ported, and committed to the ports tree (/usr/ports/editors/openoffice-2.0-devel)."

1.1.4 has been ported and committed to ports tree. Packages are available.

Invoking OpenOffice.org from command line has been changed. Now `.org' is mandatory. e.g. openoffice-1.1.4 -> openoffice.org-1.1.4. Since the name of the software is OpenOffice.org, not OpenOffice. We are also considering the name of the ports (/usr/ports/editors/openoffice-2.0-devel -> openoffice.org2-devel etc)."

The thread starts with the usual defense of "security through obscurity" one might expect:

"As many of you know Matt Blaze a professor at Pennsylvania University has published an article that reveals proprietary techniques of safe penetration. It was featured on well known hacker website recently, and it came to our attention on Saturday. It includes information normally reserved to the trade, for good reasons that need not be discussed here. The article is available to the general public without any restrictions whatsoever. We as professionals in the security field are outraged and concerned with the damage that the spread of this sensitive information will cause to security and to our profession."

Here is an educated response to this foolish opinion:

"I think you meant to say: We have to nip it in the bud or soon there will be no __APPEARANCE_OF__ security left. This is so silly on so many levels. You sell a product that has known deficiencies so that you can break in when you need to. Then you act like it's a big deal when someone talks about it! On top of that you act like it's a matter of national security when, in fact, it changes nothing.

It does not take a brain surgeon to figure out that anyone can buy a safe, disassemble it and figure out it's weaknesses. The fact that every single copy of model X is built the same way is planned insecurity. Now THAT's a crime. That they are sold as secure when they are not is a crime.

If you want to get Blaze to protect your job, that's understandable. To villify him for openly discussing what is known within the industry to be common shortcomings is shear hypocrisy.

I'm still waiting for SCHLAGE to notify folks that it's recalling their defective entry locks. Wait, they can't so that without disclosing that they are insecure, so only the locksmiths and burglers know."

One response shows that lock vendors are acting exactly like software vendors not held accountable for producing flawed software:

"The fact of the matter is the lock manufactuers, Ingersol Rand and Black and Decker being the two largest ones here in the states, dont want to spend a dollar or two more on their locks to improve them. They would rather put out pot metal junk that offers only a since of security. If the public in general only knew what I know, that being the fact that Kwikset and Titan locks are junk, the famous Schlage 'Maximam Security Deadbolt' is pot metal, Yale is no longer up to par, Sentry safes are worthless."

For a 1991 document on picking locks, check out the Guide to Lock Picking, hosted at a real "hacker site" -- MIT.

"By August 5th the agents already had a good idea what was going on, when Ethics made a fateful mistake. The hacker asked the Secret Service informant for a proxy server -- a host that would pass through Web connections, making them harder to trace. The informant was happy to oblige. The proxy he provided, of course, was a Secret Service machine specially configured for monitoring, and agents watched as the hacker surfed to "My T-Mobile," and entered a username and password belonging to Peter Cavicchia, a Secret Service cyber crime agent in New York.

Cavicchia was the agent who last year spearheaded the investigation of Jason Smathers, a former AOL employee accused of stealing 92 million customer e-mail addresses from the company to sell to a spammer. The agent was also an adopter of mobile technology, and he did a lot of work through his T-Mobile Sidekick -- an all-in-one cellphone, camera, digital organizer and e-mail terminal. The Sidekick uses T-Mobile servers for e-mail and file storage, and the stolen documents had all been lifted from Cavicchia's T-Mobile account, according to the affidavit. (Cavicchia didn't respond to an e-mail query from SecurityFocus Tuesday.)

By that time the Secret Service already had a line on Ethic's true identity. Agents had the hacker's ICQ number, which he'd used to chat with the informant. A Web search on the number turned up a 2001 resume for the then-teenaged Jacobsen, who'd been looking for a job in computer security. The e-mail address was listed as ethics@netzero.net.

The trick with the proxy honeypot provided more proof of the hacker's identity: the server's logs showed that Ethics had connected from an IP address belonging to the Residence Inn Hotel in Buffalo, New York. When the Secret Service checked the Shadowcrew logs through a backdoor set up for their use -- presumably by the informant -- they found that Ethics had logged in from the same address. A phone call to the hotel confirmed that Nicolas Jacobsen was a guest."

I strongly recommend reading the whole article for context, but the four italicized sections yield some interesting lessons:

"'Gray Hat Hacking' (GHH) is positioned as a next-generation book for so-called ethical hackers, moving beyond the tool-centric discussions of books like 'Hacking Exposed.' The authors leave their definition of 'gray hat' unresolved until ch 3, where they claim that a 'white hat' is a person who 'uncovers a vulnerability and exploits it with authorization;' a 'black hat' is one who 'uncovers a vulnerability and illegally exploits it and/or tells others how to;' and a 'gray hat' is one who 'uncovers a vulnerability, does not illegally exploit it or tell others how to do it, but works with the vendor.' I disagree and prefer SearchSecurity.com's definitions, where white hats find vulnerabilities and tell vendors without providing public exploit code; black hats find vulnerabilities, code exploits, and maliciously attack victims; and gray hats find vulnerabilities, publish exploits, but do not illegally use them. According to these more common definitions, the book should have been called 'White Hat Hacking.' I doubt it would sell as well with that title!"

My review echoes most of Patrick Mueller's review in Information Security magazine, except for his comment that "The authors did, however, deliver on their ethical obligations to provide accurate countermeasures to the attack methods they describe--a true value to readers." This makes no sense to me. Defense gets a short 10-page chapter, which should have been dropped and replaced by a reference to any of the extensive tomes written about network defense.

I wrote about the Metasploit Framework in April 2004. The Metasploit Framework is an advanced open-source platform for developing, testing, and using exploit code. This week they released version 2.3, which offers 3 user interfaces, 46 exploits, and 68 payloads. One of the more interesting additions is the Meterpreter (.pdf). This is a replacement for calling cmd.exe on Windows after an exploit succeeds. Windows support is currently offered and UNIX (to replace calling /bin/sh) is planned. The Meterpreter is extensible, so you can add features once you gain control of the target. You can browse the exploits and payloads using their Web-based interface.

Tuesday, January 11, 2005

I think Sun and Apple are doing real innovation in the commercial software and hardware spaces, unlike many of their competitors. I already own an old Sun Ultra 30, and I plan to buy several Sun Ray thin clients at work. I've been looking for an excuse to get an Apple system of some kind ever since Mac OS X was released. Since I run FreeBSD on my Thinkpad a20p laptop, I don't need another desktop or laptop system. I've also vowed to never buy another tower form factor PC again. It's either small form factor, laptops, or rackmounts from here on. That left buying an Apple Xserve, which was more horsepower than I could justify buying.
Today, Apple released the Mac mini, pictured at right and in Steve Jobs' hands above. This looks like a great little box. Take a look at the back side below. Although it only has one built-in Ethernet port, those two USB 2.0 ports say "additional NICs" to me, assuming I can get a USB-based NIC to work with OS X.
I don't plan to use this Mac as a desktop. Rather, I'll deploy it on my wire shelving in my lab and access it remotely. I'm wondering if I can boot this baby without a monitor attached. This MacFixIt tip suggests connecting an Apple display adapter to fool the Mac into booting without a display attached. It looks like Fink provides VNC packages, so I can access the whole Mac OS X desktop remotely.

If I buy a Mac mini, I'll report how I use it. Maybe once I finish this new book?

This past weekend I decided to remove the firewall/gateway from the picture. When the router is deployed like this, it's called a "router on a stick."

cable modem - cisco router - cisco switch - clients

In that late 2003 story I explained how I set up 802.1q on the FreeBSD system to pass traffic between VLANs on the Cisco switch. Without that FreeBSD in place, I needed to configure my Cisco 2651XM router to exchange inter-VLAN traffic.

Luckily this Cisco document came to the rescue. The process was fairly simple. I administered the router via console cable, so none of my changes could lock me out of an interface. In any case, I don't recommend allowing remote connections to a Cisco router's interfaces. (For a great presentation on router security, check out this .pdf of a presentation by Sean Convery and Matthew Franz.)

First I removed the IP address previously assigned to the interface facing the switch:

int fa0/1
no ip address 192.168.40.2 255.255.255.0

Next I created an IP address to handle VLAN 10, which is a 10.10.10.0/24 network. Note the use of '0/1.1' instead of '0/1':
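The subinterface configuration was probably along these lines -- a sketch reconstructed from the text, assuming VLAN 10 uses 802.1q tagging and the router takes the .1 address as the VLAN's gateway:

```
int fa0/1.1
 encapsulation dot1q 10
 ip address 10.10.10.1 255.255.255.0
```

With a subinterface like this for each VLAN, the router can route traffic between VLANs over the single trunk link to the switch.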

Monday, January 10, 2005

Today's Slashdot features Security Holes Draw Linux Developers' Ire. Essentially the GRSecurity Linux security patch developers are upset about the lack of response to their discovery of Linux kernel vulnerabilities. This article by Brad Spengler features the 31337 technique used to find the holes:

"Using 'advanced static analysis':

cd drivers; grep copy_from_user -r ./* |grep -v sizeof

I discovered 4 exploitable vulnerabilities in a matter of 15 minutes. More vulnerabilities were found in 2.6 than in 2.4. It's a pretty sad state of affairs for Linux security when someone can find 4 exploitable vulnerabilities in a matter of minutes."

I am disappointed that this is the case. I am not a kernel developer, so I won't comment on the difficulties associated with removing these sorts of vulnerabilities. However, some of those who are kernel developers do not seem to be heeding the warnings in books like Building Secure Software, which I reviewed last week. This is an unfortunate indictment of part of our software engineering community, especially when Linux is being deployed in ever more important places.

More disturbing for me was this email from kernel developer Ted Ts'o in the linux-kernel mailing list:

"Not all 2.6.x kernels will be good; but if we do releases every 1 or 2 weeks, some of them *will* be good."

I could be accused of taking this out of context, but to me this sort of thinking is not what I want to hear associated with a kernel called stable. This is exactly the point of the Slashdot commentator who brought this email to my attention. I saw the same mentality in The Hacker Ethic, where ESR criticizes the BSD development model:

BSD is "carefully coordinated... by a relatively small, tightly knit group of people" [in comparison with Linux, where] quality was maintained not by rigid standards or autocracy but by "the naively simple strategy of releasing every week and getting feedback."

I prefer the BSD model, where users and administrators know that CURRENT is bleeding edge and STABLE is more or less that -- "stable." Those who need even more "stability" can track a security release, where the primary changes are security fixes and critical bug fixes.

I think if we continue to see this sort of development process, Linux vendors will have no choice but to heavily patch the "vanilla" Linux kernel and provide that patched version in their distros. They of course can do that, but I believe such patching contributes to the fragmentation of the Linux community. That increases the level of difficulty of writing projects like l7-filter, which itself requires patches for the Linux kernel to operate.

Sunday, January 09, 2005

Today I moved my local name resolution duties from a FreeBSD 4.x system to a FreeBSD 5.3 system. I found the FreeBSD Handbook sparse reading, but this article gave a few more pointers. Here's what I ended up doing.

I altered the serial numbers by adding '01' to the end to allow 99 edits per day. (Using the default '20050109' yields one edit per day, if you want your serial number to be related to the day you change it. This is totally optional but I find it helpful.)
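In a zone file's SOA record, that serial number convention looks like this (the zone names and timer values here are hypothetical):

```
@  IN  SOA  ns1.example.com. admin.example.com. (
        2005010901 ; serial: YYYYMMDD plus a two-digit edit counter
        3600       ; refresh
        900        ; retry
        604800     ; expire
        3600 )     ; minimum TTL
```

Incrementing the last two digits for each edit keeps the serial increasing, which is what secondaries use to decide whether to transfer the zone.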

I then added that information plus a control statement to /var/named/etc/namedb/named.conf:

controls {
        inet 127.0.0.1 allow { localhost; } keys { rndc-key; };
};

key "rndc-key" {
        algorithm hmac-md5;
        secret "OBSCURED";
};

I found I had a file /etc/rndc.conf that had a matching key:

options {
        default-server localhost;
        default-key "rndc-key";
};

server localhost {
        key "rndc-key";
};

key "rndc-key" {
        algorithm hmac-md5;
        secret "OBSCURED";
};

With this infrastructure in place, I essentially copied my old zone configuration files into /var/named/etc/namedb. I made sure to update all of the serial numbers on files with changes. Once done I used the new rc scripts to restart named:
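On FreeBSD 5.3 the restart likely looked like this (assuming named_enable="YES" is already set in /etc/rc.conf):

```
/etc/rc.d/named restart
```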

Saturday, January 08, 2005

When I worked incident response for Foundstone, my boss Kevin Mandia taught me about "investigative leads." This is a Bureau/law enforcement term for items which are recognized as important in a report but require additional scrutiny. I have several network security monitoring investigative leads which I have not yet had time to follow. I list them here in the event one or more of my readers have checked them out:

In November Dave Aitel of Immunity, Inc. posted an announcement of his company's CANVAS Reference Implementation (CRI). CANVAS is a penetration testing toolkit consisting of private exploits written by Immunity, Inc. The CRI is a subset of CANVAS, available for free under NDA, aimed at those wishing to test IDS and layer 7 firewalls (aka "IPS"). I plan to try this out soon, but don't expect public results due to the NDA.

There's an extended focus-ids thread discussing the need for packet capture and the problems of doing so in high bandwidth environments. Anyone who has seen my Amazon.com Wish List will notice I am researching hardware-based approaches to the problem, like network processors, FPGAs, and microcontrollers.

A friend pointed me to l7-filter, an "Application Layer Packet Classifier for Linux." This looks really cool. Along with the upcoming release of Snort 2.3 with integrated inline capabilities, I'm being forced to deploy one or more Linux boxes to try these features. If l7-filter is able to profile traffic running on arbitrary ports, it will give open-source-bound NSM analysts a powerful new capability.

If you have trouble justifying your monitoring duties, you'll face less resistance if you share Wanted: Chief Espionage Officer with the doubting parties. I have yet to read all of this article, but it's a detailed look at (illegal) corporate intelligence gathering.

Regarding the third point -- would anyone care to suggest a Linux distro for my snort-inline and l7-filter projects? I'm going to be running on minimal hardware without X. I'm leaning toward Debian or Slackware and away from Fedora Core, Mandrake, and Gentoo. I'd like a Linux distro that uses the kernel.org kernel as-is, or as much as possible. Is there such a thing? Coming from BSD-land, I'm not current on the Linux scene. Thank you.

Today is the 2nd birthday of the TaoSecurity blog. Thank you to all of my readers. The primary purpose of this blog is to be a "hard drive for my brain." In other words, I mainly record how I accomplish certain tasks, or I put context around security events and related developments. I hope you find the content useful and relevant.

Friday, January 07, 2005

Several people have asked me to comment on George McGarry's Benchmark Comparison of NetBSD 2.0 and FreeBSD 5.3. My initial reaction to this article is disappointment. I am not upset because the author says his "results indicate that NetBSD has surpassed FreeBSD in performance on nearly every benchmark and is poised to grab the title of the best operating system for the server environment." I am disappointed that the author has decided to use his "results" in a divisive manner. Rather than seek to learn from each BSD project and potentially compete better with Linux or Windows, George decides to drive a wedge between the NetBSD and FreeBSD communities.

While I find Hubert's blog to be a good independent source of NetBSD information, I question why he is the delivery man. Why can't George post for himself, and answer replies publicly? An author who makes strong claims should be willing to stand up for his beliefs in person. None of these postings have been followed by responses from George or Hubert.

Regarding the "results" themselves, I found three responses interesting. First, Stheg Olloydson points out that George's test results do not support his conclusions. Stheg uses excerpts from George's paper to make his case.

The second response defends FreeBSD's SMP re-architecture:

"Regarding SMP or not -- the path the FreeBSD Project has taken (and this choice was before I was really all that involved, to be honest) was a re-architecture of the kernel to improve performance, scalability, and structure via a movement to a parallelizable, preemptible, threaded kernel. I think this is the right architecture to move to, as it not only improves performance and scalability, but it also closes a lot of existing race conditions in the kernel that only became more exposed as threading and SMP became more predominant. This has had a lot of performance benefits, but comes with initial costs that aren't all immediately offset by initial benefits. Now that this model is largely adopted, we'll see a nice increase in benefits over time -- i.e., it was an investment."

The third response comes from Kris Kennaway, who mentions the controversial nature of the fefe.de benchmarks. Kris also says:

"There's a leap from 'NetBSD performs better in microbenchmarks' to 'NetBSD is a better-performing server' (macrobenchmarks often do not reflect the same performance characteristics as microbenchmarks, although of course they are influenced by them)."

Slashdot is covering this story now, and there are a few helpful comments in the responses.

I personally use whatever operating system best suits the project at hand. For example, I prefer to use OpenBSD for firewalls and FreeBSD for general purpose servers. I have plans for a NetBSD system to become a terminal server, since it supports an obscure piece of hardware on that box and the machine offers only 32 MB RAM.

In conclusion, I would be much happier to see performance comparisons, especially between BSD versions, used to improve the collective performance of each variant. Instead, we see trolls here and here using these "results" to justify personal vendettas. I hope to see personal replies from George in the near future, or I will continue to be suspicious of his motives.

Tuesday, January 04, 2005

"'Building Secure Software' (BSS) is an excellent book. I can't believe it was published in the fall of 2001, and I've only gotten to it now. Negative reviewers should remember that a single book can't address every security topic under the sun. BSS is the first of several titles by authors Viega and McGraw; those looking for additional details can peruse their later books."

Monday, January 03, 2005

Although the FreeBSD Handbook offers a VPN over IPSec section, it doesn't describe the scenario I face when deploying network security monitoring sensors. That document also references commands that no longer exist in FreeBSD 5.3, like 'gifconfig.' My architecture looks like this (all IP addresses are obfuscated):
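A rough sketch of the topology, reconstructed from the addresses used below:

```
sensor ------------ Internet ------------ gateway/NAT ---- backend
18.235.153.37                             78.172.25.27     192.168.1.10
gif0: 10.4.12.10                          gif0: 10.4.12.1  (private, behind NAT)
```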

I need to encrypt communications from the sensor to the monitoring backend. This can involve multiple individual sockets. I don't like to use OpenSSH port forwarding or Stunnel because I must set up a separate port forwarding or tunnel session for each channel. I would much rather use IPSec, since that can carry any communications between the sensor and the backend.

Complicating matters, I need to communicate between a sensor with a public management IP and a backend with an internal private IP address. That backend internal private IP address is transformed using NAT on the VPN concentrator and NAT gateway. All boxes in this scenario run FreeBSD 5.3 RELEASE.

One answer to this problem, and the approach I use, is to create a virtual tunnel from the sensor to the gateway, through which traffic to and from the backend can pass. I will use the gif facility in FreeBSD. This will create an IP-in-IP tunnel, which I will then wrap inside IPSec ESP.

The monitoring backend will communicate with 10.4.12.10 when it needs to talk to the sensor. The sensor will communicate with 192.168.1.10 when it needs to talk to the backend. The gateway will take care of connecting the two endpoints.

The first step is to recompile the kernels of the sensor and gateway to suit their roles. Here is what I add to the sensor's kernel config file before recompiling the kernel:

options         FAST_IPSEC
device          crypto

Here is what I add to the gateway's kernel config file before recompiling the kernel. The last two lines are completely optional, but the IPFIREWALL_DEFAULT_TO_ACCEPT means I don't need to add rules to permit later traffic:
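A sketch of what that gateway kernel configuration likely contained -- the exact option list is an assumption, beyond the IPFIREWALL_DEFAULT_TO_ACCEPT option the text names (IPFIREWALL and IPDIVERT are required for ipfw and natd):

```
options         FAST_IPSEC
device          crypto
options         IPFIREWALL
options         IPDIVERT
options         IPFIREWALL_VERBOSE
options         IPFIREWALL_DEFAULT_TO_ACCEPT
```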

The gifconfig statement defines the public IPs used as the tunnel endpoints. The ifconfig_gif0 statement sets up the tunnel, with 10.4.12.10 as the local endpoint and 10.4.12.1 as the remote endpoint. The static_routes and route_gif0_0 statements tell the sensor how to reach the backend network.
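Reconstructed from that description, the sensor's /etc/rc.conf entries would look roughly like this (the /30 netmask is an assumption):

```
gifconfig_gif0="18.235.153.37 78.172.25.27"
ifconfig_gif0="inet 10.4.12.10 10.4.12.1 netmask 255.255.255.252"
static_routes="gif0_0"
route_gif0_0="-net 192.168.1.0/24 10.4.12.1"
#ipsec_enable="YES"
```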

First I tell the gateway to act as a gateway, and I enable the firewall. I also enable NAT, with em0 being the Internet-facing interface with the external public IP address. I have commented out a natd_flags line showing how to do port forwarding. For example, a connection to port 8080 TCP on the gateway's external IP would be sent to the internal system 192.168.1.10.
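A sketch of those gateway /etc/rc.conf entries, reconstructed from the description (the firewall_type value is an assumption):

```
gateway_enable="YES"
firewall_enable="YES"
firewall_type="OPEN"
natd_enable="YES"
natd_interface="em0"
#natd_flags="-redirect_port tcp 192.168.1.10:8080 8080"
```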

Next I set up the gif interfaces for this end of the tunnel. They are mirror images of the entries for the sensor.
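Mirroring the sensor's entries, the gateway's gif configuration would look roughly like this (again assuming a /30 netmask):

```
gifconfig_gif0="78.172.25.27 18.235.153.37"
ifconfig_gif0="inet 10.4.12.1 10.4.12.10 netmask 255.255.255.252"
#ipsec_enable="YES"
```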

Note that in both cases I have commented out ipsec_enable="YES" for the moment. It is important to get a working IPSec configuration before one enables ipsec_enable="YES" in /etc/rc.conf. If you reboot a system with ipsec_enable="YES" uncommented, and your /etc/ipsec.conf configuration file is faulty, the system will not completely boot up. You will end up needing physical access to the system or remote serial access to fix the problem.

We have done enough at this point to try sending traffic without using IPSec, but with the gif tunnel. To create the gif interface manually on the sensor, use syntax like this:
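On FreeBSD 5.3 that manual setup is probably something like the following, using the addresses above:

```
ifconfig gif0 create
ifconfig gif0 tunnel 18.235.153.37 78.172.25.27
ifconfig gif0 inet 10.4.12.10 10.4.12.1 netmask 255.255.255.252
```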

Now that the traffic is being passed appropriately, we need to apply IPSec ESP to it. We create the following /etc/ipsec.conf file on the sensor. All of the spdadd statements should occupy a single unbroken line:
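Reconstructed from the four statement descriptions that follow, the sensor's /etc/ipsec.conf would look roughly like this (each spdadd on a single line; the esp/tunnel/.../require transform is an assumption consistent with typical setkey policies):

```
flush;
spdflush;
spdadd 10.4.12.10/32 10.4.12.1/32 any -P out ipsec esp/tunnel/18.235.153.37-78.172.25.27/require;
spdadd 10.4.12.1/32 10.4.12.10/32 any -P in ipsec esp/tunnel/78.172.25.27-18.235.153.37/require;
spdadd 10.4.12.10/32 192.168.1.0/24 any -P out ipsec esp/tunnel/18.235.153.37-78.172.25.27/require;
spdadd 192.168.1.0/24 10.4.12.10/32 any -P in ipsec esp/tunnel/78.172.25.27-18.235.153.37/require;
```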

The first two lines flush IPSec Security Association Database (SAD) entries and Security Policy Database (SPD) entries. The first spdadd statement says traffic sent out from 10.4.12.10 to 10.4.12.1 should go via the IPSec tunnel from 18.235.153.37 to 78.172.25.27.

The second spdadd statement says traffic sent in from 10.4.12.1 to 10.4.12.10 should go via the IPSec tunnel from 78.172.25.27 to 18.235.153.37. These two entries are enough to protect traffic sent between the sensor and gateway.

The third spdadd statement says traffic sent out from 10.4.12.10 to the 192.168.1.0/24 network should go via the IPSec tunnel from 18.235.153.37 to 78.172.25.27.

The fourth spdadd statement says traffic sent in from the 192.168.1.0/24 network to 10.4.12.10 should go via the IPSec tunnel from 78.172.25.27 to 18.235.153.37. These two entries protect traffic sent between the sensor and the backend.

The /etc/ipsec.conf file on the gateway is a mirror image of the sensor's /etc/ipsec.conf:
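As a mirror image, the gateway's policies reverse the in/out directions and tunnel endpoints -- a sketch under the same assumptions as the sensor's file:

```
flush;
spdflush;
spdadd 10.4.12.1/32 10.4.12.10/32 any -P out ipsec esp/tunnel/78.172.25.27-18.235.153.37/require;
spdadd 10.4.12.10/32 10.4.12.1/32 any -P in ipsec esp/tunnel/18.235.153.37-78.172.25.27/require;
spdadd 192.168.1.0/24 10.4.12.10/32 any -P out ipsec esp/tunnel/78.172.25.27-18.235.153.37/require;
spdadd 10.4.12.10/32 192.168.1.0/24 any -P in ipsec esp/tunnel/18.235.153.37-78.172.25.27/require;
```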

Now that the /etc/ipsec.conf files are ready, the last step is to install a program to manage key negotiations. We'll use Racoon, which can be installed via the security/racoon port. First, I made this change to the /usr/local/etc/racoon/racoon.conf file on the sensor to tell Racoon where to listen for key exchange packets:

listen {
        isakmp 18.235.153.37 [500];
}

On the gateway, the modification looks like this:

listen {
        isakmp 78.172.25.27 [500];
}

Now, both public IP endpoints are listening on port 500 UDP for key exchange traffic.

Next I enabled a secret key. On the sensor, /usr/local/etc/racoon/psk.txt looks like this, which says use the specified key with the gateway 78.172.25.27:

78.172.25.27 thisisabadsecret

On the gateway, /usr/local/etc/racoon/psk.txt looks like this, which says use the specified key with the sensor 18.235.153.37:

18.235.153.37 thisisabadsecret

Make sure the permissions for the /usr/local/etc/racoon/psk.txt file are 600, or the Racoon daemon will complain.
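A quick way to set and verify those permissions -- shown here against a stand-in file name rather than the real /usr/local/etc/racoon/psk.txt path:

```shell
# racoon refuses to use a pre-shared key file readable by anyone but its owner
touch psk.txt                 # stand-in for /usr/local/etc/racoon/psk.txt
chmod 600 psk.txt             # owner read/write only
stat -c '%a' psk.txt          # prints 600
```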

Now we are ready to start up Racoon, and enable IPSec. I recommend starting Racoon on the sensor and gateway in separate windows, using the 'racoon -F' syntax to show Racoon running in the foreground. Next enable IPSec via 'setkey -f /etc/ipsec.conf' on each system.

You can test your IPSec tunnel by pinging the gateway's gif IP address from the sensor:
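For example, from the sensor:

```
ping 10.4.12.1
```

If the key exchange succeeds, the first ping may take a moment while Racoon negotiates the security associations.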

That's it. You will also be able to ping the backend (192.168.1.10) from the sensor, or ping the sensor (10.4.12.10) from the backend. It will all be encrypted via IPSec.

If you've been trying to deploy IPSec on FreeBSD, and have followed certain threads, you'll see I encountered no issues with enabling FAST_IPSEC and INET6 in the kernel. I also did not have to exempt port 500 UDP key exchange traffic in my /etc/ipsec.conf file. Those two problems seem to have been ironed out with FreeBSD 5.3 RELEASE.