Posted
by
kdawson
on Friday August 24, 2007 @12:33PM
from the hmmm-ls-looks-funny dept.

This blog entry is the step-by-step process that one administrator followed to figure out what was going on with a cracked Linux server. It's quite interesting to me, since I had the exact same problem (a misbehaving ls -h command) on a development server a while back. As it turns out, my server was cracked, maybe with the same tool, and this analysis is much more thorough than the one I was able to do at the time. If you've ever wondered how to diagnose a Linux server that has been hijacked, this short article is a good starting point.

Why Slashdot would post such obvious anti-Linux FUD is beyond me. Maybe the M$ advertising dollars are turning their heads.

The bottom line is that a LINUX SERVER CAN'T BE CRACKED.

Maybe this admin got his login info phished by Nigerian scammers, I don't know. The guy is probably wondering why his eBay account has a bunch of negative feedback and his MySpace is all jacked up, and hasn't put 2 and 2 together with that time he responded to that clever email asking for the triple whammy of MySpace/eBay/root on your servers so that you could clear the money transfer.

That, or he didn't have his updates turned on and had an outdated BIND. And it's not like a BIND hole means Linux is insecure.

Even so, the idea that Linux is crackable is laughable and not worth the front page at Digg, let alone Slashdot. You don't see Technorati or Bruce Perens' site posting garbage like this, so why the Slashdot editors can't see through it I don't know.

Great attitude to have. It's like saying "no one can pick my front door lock." Vulnerabilities are found all the time, and just because they are found and patched doesn't mean that someone couldn't have exploited them before that point.

I had a co-lo rental from Pipex. Linux 2.2. They noticed it was broken into, cut us off, and charged us to re-image the box, on which they had left a tar of the drive. OK, sounds fair enough, but they re-imaged it with EXACTLY the same Linux 2.2 install, and it was infiltrated again by the time I got the email telling me it was back online. I fixed it by hand and never told them, lest they charge the company again. Happily, I quit soon after.

No logs, so no way to tell, but during the break-in they ssh'd into another box as the ftp user; so if this server had an ftp user set up, it's possible they did the same thing, trolling for open FTP servers.

Forgetting to log out happens all the time. In college, anyone who forgot to log out in our Unix student lab would find the next day that they had sent obscene emails and love poems to a few select faculty members.

Well said. It's like two teams of kids at Easter, one team wearing black hats, the other white. Let the kids loose in a field full of Easter eggs. Most people like the white-hat team better, as they don't pick up their eggs and throw them at other people. However, that doesn't mean the less-liked team will find fewer eggs, nor refrain from throwing them. I'll have to quote Scotty again:

"The more they overthink the plumbing, the easier it is to stop up the drain..."

Where did the word forensics come from? This is completely the wrong approach if you are working forensically. Can Slashdot please not use sensational titles! "Analysis of a cracked box" may be more appropriate.

For those who do not knowingly experience 'cracked' Linux boxes (i.e., who don't know everything to look for), articles like this are a great way to learn from others. Kudos to 'lars' for sharing his findings with the world and reminding us all that security is an evolving process.

This article is somewhat helpful in that it does show one way to catch crackers, although he goes about it somewhat clumsily (an "ls" command that doesn't accept a flag you know to be valid, especially when that flag has been aliased in your own shell for months, should instantly tell you you have a cracked box), and the method by which he finds the rootkit relies on a mistake that most non-moron crackers would not make (neglecting to remove the .bash_history file).

It's unfortunate that this cracker made such an elementary mistake; it would have been interesting to see more advanced techniques for detecting rootkits. However, his analysis of the rootkit itself does provide some good information as to what a typical rootkit will generally do (replace core binaries, hide itself, use innocuous-looking names, etc.).

And the most important question is, how did he get access in the first place? The server was running Ubuntu 6.06 LTS (i386) and was fairly up to date. The compromise could have been caused by:

* An exploit unknown to the public.
* A user accessing this server from an already compromised host. The attacker could then sniff the password.

It's a very good question, because if the guy was keeping his server up-to-date, then these two are the most likely scenarios.

On tools... it's important to note that in forensics on a Linux box, your friends are Ethereal (for watching packets on open connections), netstat (to see what's listening), and strace (which shows you what Unix API calls a running process makes, and gives you a very good idea of what's going on).

Other tools: nmap may be useful for seeing what's going on with 62.101.251.166 and 83.18.74.235; the service detection options, in particular. Always do this from a sandboxed host. Something running in a VM might be useful in this regard.

Anyway, nice article. This is almost exactly how I proceeded when one of my own servers was hacked a few years ago.

Bruce Schneier posted this a few days back [schneier.com]. Consensus is that it's not that good an analysis, but that the attacker was even worse. Some discussion also of whether it is better to take the machine offline immediately (and risk alerting the attacker that he has been rumbled) or to begin your analysis with the machine still live and operational. I for one side with the 'shut that thing down NOW' faction.

On the other hand, shutting down the box ASAP makes it much harder to find the guy.

For example, one of Vodafone Greece's first reactions to finding that some of their switching systems had been rootkitted was to remove the offending software. This removal was one of the main contributing factors in the authorities having no chance to ever find the group that had compromised the system; that, along with a couple of other screwups, led to Vodafone getting fined a pretty hefty sum.

Or do I play amateur cop / responsible citizen (depends on your point of view), and try to sniff and smoke the bastards out?

Tough call.

Having said that, some of my clients are massive multinationals, (like Vodafone), and they seem more preoccupied with cutting costs than taking this kind of threat seriously. Whilst a local entity - to take your example, Greece - could not necessarily

I think it's probably the fact that the owner of this system had the root password set to "GOD" as all good sysadmins do. The hacker's extensive experience hacking the Gibson made getting into this system a cakewalk.

Clearly, we as sysadmins should rethink the long-standing policy of setting all root passwords to either love, secret, sex, or god. Perhaps we should at least add another password to the list, like "unhackable" or something truly secure like that.

There are a few things which immediately spring to mind:

1. We already know that it was meant to be running Apache. Perhaps there was some PHP application which wasn't very secure? Even so, if that were the case, then the exploit they used must have been fairly convoluted, because it probably wouldn't have got them root access immediately.

2. We don't know what other services were supposed to be running, how/if they were firewalled and secured. SSH, for instance, is only as secure as the weakest password on

All of these will help only if the box was cracked by amateur sr1pt k1dd10tz, as in this case. If it is cracked properly, you will not see anything, or you will spook the intruder. He will either go underground or destroy the box with all of your data (not that you should try to use it, as it may have been altered).

I have seen a number of rootkits for Linux as far back as '97-'98 which were considerably more advanced. It was a bit of an arms race between the admins (including me) and the guys who were breaking in. By the end, the best rootkits could:

1. Load a whole hidden fs with tools into a ramdisk or a hidden area on the filesystem not visible using normal tools.
2. Hide all sockets, processes and files belonging to the rootkit completely. You simply could no longer see them using netstat, ps and other similar tools.
3. Monitor network driver state for the promisc flag and "scrub" backdoor traffic out of it so it was no longer visible using tcpdump and Ethereal.
4. Adjust memory totals and df so that you did not see them. This was also the only way we found to catch it: try to allocate 95% of the remaining free memory and watch the system oops majestically.
5. Doctor logs so that you could not notice anything.
6. The rootkit itself handled all connections via something that looked like ssh. I never managed to figure out how it loaded. One of the executables loaded at startup was backdoored; probably sendmail or one of the other daemons it could not do without.
7. The rootkit managed to mask changed files completely. Tripwire and md5sums were reporting all OK while executables were being changed.

That was the tech level in '97. I would expect a good rootkit to be even better 10 years later. Looking at the blog post, I can only laugh.

If you suspect a system is cracked:

1. Take it offline and take the disks out. Analyse the system completely offline, looking at the disk from another system, mounted read-only (on SCSI discs, use the RO jumper). Never even try to start it. Nowadays Knoppix is a great help. Most importantly, do not fsck the filesystems before mounting, as the rootkit may hide in orphaned areas which fsck will fix.

2. If you are monitoring traffic, monitor it on a switch span port, or build yourself a simple multiple-interface box which serves as a firewalling bridge (so you can hijack the more interesting bits and alter them). Lex Book PCs are a good choice, as they can run either Linux or BSD and are as portable as a laptop. A recent Via board with 2 Ethernet ports is also a good choice, as it can bridge up to 1Gbit of traffic.

Correct. Always pull the plug out of the wall the moment you suspect that something is wrong. This is what I meant when I said "take it offline" (my fault, I should have written it better). If it is compromised, the data on it is worthless anyway and you need to go back to backups, so the loss of data from pulling the plug is truly in the "who cares" area.

And disks have gotten very good in the last few years. I haven't seen any (immediate) data loss from hard power cycling/plug pulling in I don't know how long. A former co-worker used to turn her G5 off every day by pressing the front button. I saw her do this once and said (very nicely) "You know it's better to shut down from the menu, right?" and she answered "Yeah, I know you're not supposed to do that, but it's faster." She had been doing that nightly (or maybe just weekly) for a couple years.

A former co-worker used to turn her G5 off every day by pressing the front button. I saw her do this once and said (very nicely) "You know it's better to shut down from the menu, right?" and she answered "Yeah, I know you're not supposed to do that, but it's faster." She had been doing that nightly (or maybe just weekly) for a couple years.

On a G5 (and, indeed, most PCs and Macs less than 6-7 years old) pressing the power button should result in a clean shutdown.

If I suspected something was wrong with my home machines and didn't care to figure out what happened, I'd just revert the relevant virtual machine to a clean snapshot, disconnect the network connections, and patch, restore data, etc. If I did care, I could either suspend the virtual machine or make a snapshot of it.

Virtual machines are cool. :) Once x86 hardware gets more efficient at running VMs (including IO), I think I'll run everything virtualized. You can't get away with doing that red pill, blue pill thing

Would holding the power button for 5 seconds alert the rootkit? Because that's what I normally recommend instead of pulling the plug. It would be a shame to damage the power supply just to shut down the computer.

I'm afraid that most software tools are not inherently better than those in 1997: most attackers, and even most successful attacks, are script kiddies with tools. Even skilled crackers like Mitnick consistently make foolish mistakes. (In Mitnick's case, it was leaving messages mocking his victims and getting the FBI really, really mad at him, angry enough to actually prosecute.) There are plenty of vaunted crackers who make other amazingly stupid mistakes, both programming and social. The IRC-bot creator

I have no idea why you say "Couldn't agree less" when eliminating most if not all IRC services is exactly what I meant. Open relay mail servers are relentlessly exposed, hounded, and blocked by most email servers. But by the way, do not begin to pretend that most installations of Jabber are any better administered than most installations of IRC. Plain-text passwords stored on the server are just an amazingly bad idea: it's almost as stupid as Subversion keeping your user passwords in plain text in your home

Sorry, too high a blood level in the caffeine subsystem when posting the GP. I was in absolute agreement. IRC must die. As far as Jabber vs IRC vs the rest of the IM world, I agree they all suck and they can all be used for zombie control. You can write a bot that logs in on Yahoo, AIM or anything else you like. I used to have a Yahoo Messenger bot that talked to a MON alert system and pinged me when something went apeshit in the network (you could also get network status and such). Writing it was quite trivial, unfo

In my case the attacker did not leave the rootkit on the system. We never managed to find it. We found a couple of backdoors now and then, none of which was particularly fancy. For example, sendmail had an extra command added which executed a shell, etc. So I suspect that he loaded the rootkit straight into memory over the network after accessing the compromised machine through the backdoor. As a result it was never present for forensics.

The most unpleasant bit was that he nuked the machine at the slightest su

I dunno. What about automatic account lockout after 5 unsuccessful tries and stuff like that? There are some things that can be done to prevent dictionary attacks from working, or at least from working well enough that they would succeed. There used to be an email program you could run (monkey business, or something like that) that would just return hits on anything tried other than the actual accounts. It was designed to make harvesting addresses as useless as mailing to the dictionary itself. I don't see why so

Forensics has to be useful in court. This is not; it's tainted evidence. Now, if they took the original disk out, copied it with dd or similar to a file, mounted that as a loopback device, and worked on the copy, then that would be a first step toward a forensic analysis.

Uh, just because the term "forensics" is sometimes used in a limited sense in the legal sphere doesn't mean it can't be used in a more casual sense elsewhere. If he'd called it a "postmortem" would you be complaining that it wasn't performed by a licensed medical examiner?

The definition of the word forensics is, "The use of science and technology to investigate and establish facts in criminal or civil courts of law." The original poster's argument is correct. This was not forensics. It was an analysis.

In computer security, 'forensics' has a well-established meaning. Any computer security class will teach proper forensic procedures that preserve the trail of evidence for use in a court of law. As this is an article about computer security, I and the other posters naturally assumed the word was used in that context. This analysis is not proper forensics, and the evidence gathered would likely be inadmissible in court. That was what was meant. You can argue semantics and definitions all you like, but anyone

When I said 'that was what was meant,' I meant that the posters to whom you were replying were using the word forensics in the proper, computer-security-related context. You presume too much in assuming you know what others meant. Crow your triumphant pedantry to the world; it won't change the fact that we are all laughing at your utter lack of knowledge. The funny thing is, even the definition you tried to apply does not fit. The term 'forensics', when used in the context of 'an argumentative exercise', mean

Does rkhunter send you an email when the cracker changes /usr/bin/rkhunter so that it won't email you the attacker's changes?

If you think that rkhunter will protect you from a Linux kernel module rootkit, you're completely delusional. NOTHING will _reliably_ locate an LKM rootkit. That's the point of it.

Think about it. Rkhunter relies on the ability of the kernel to accurately report file sizes, file names, and running processes, as well as a bunch of other little details that normal rootkit makers tend to get wrong. When that kernel is subverted and controlled by its new owner to give rkhunter, as well as other processes (such as your bash shell), false information about the system, then those things are completely worthless.

It's the same as virus scanning on Linux (or any other system). Once the attacker gets root access, they have access to the kernel. Once they have access to the kernel, they can use it against you to hide what they are doing. Since userspace runs on top of the kernel, any activity can be hidden by making the kernel lie to anything running in userspace.

This includes logging daemons, rootkit detection software, administrators, virus detection, rpm checksums, or anything else that people use to give themselves a FALSE sense of security.
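The "kernel lies to userspace" point can be sketched with a toy model (Python, purely illustrative: a real rootkit does this filtering inside kernel syscall handlers, not in a script, and the file names here are made up):

```python
# Toy model of a kernel-level rootkit: every userspace tool, including
# security software, only ever sees what the (subverted) kernel reports.

REAL_FILES = ["/bin/ls", "/usr/bin/rkhunter", "/tmp/.hidden_rootkit"]

def honest_kernel_listdir():
    """What an uncompromised kernel would report."""
    return list(REAL_FILES)

def subverted_kernel_listdir(hidden_prefix="/tmp/.hidden"):
    """A rootkitted kernel silently filters its own files out of every answer."""
    return [f for f in honest_kernel_listdir() if not f.startswith(hidden_prefix)]

# Any checker running on the compromised host sees only the filtered view,
# so the rootkit's files simply do not exist as far as it can tell.
visible = subverted_kernel_listdir()
```

In this model, nothing that consults `subverted_kernel_listdir()` can ever notice `/tmp/.hidden_rootkit`, which is exactly why same-host detection tools inspire a false sense of security.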

There are two ways to reliably detect a rooted machine.

The first way is to use a network-based Intrusion Detection System (IDS). One of the best is a commercially supported open-source application called Snort. These can be hooked up to networks in a passive and completely undetectable way and are used to monitor traffic. They will alert administrators to any unusual network activity.

A network-based IDS can be fooled, but as an administrator you're at least operating on the same playing field, since your own software isn't being used against you.

The second, and more reliable, way is to use a checksum-style IDS. md5deep, AIDE, and Tripwire are three very good examples of this.

However, the way people typically use these tools makes them completely worthless. If you keep the checksums and run the checksum software on the same machine as the one you're trying to check, it's no good: since they rely on the kernel, any kernel-level rootkit can defeat them, and the attacker can edit and substitute incorrect checksums.

In order for tools like AIDE to be useful, they need to be run from read-only media and from a different operating system than the one you're checking (for example, booted from a Knoppix CD-ROM, or with the disk moved to a dedicated, not-connected-to-any-network 'Tripwire' machine).
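The core of the checksum approach fits in a few lines of Python (a minimal sketch only: real tools like AIDE and Tripwire also track permissions, inodes, and so on, and the baseline must be stored on read-only media off the monitored host, since a baseline the attacker can rewrite proves nothing):

```python
import hashlib
import os

def file_sha256(path, chunk_size=65536):
    """Hash a file in chunks so large binaries need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def make_baseline(paths):
    """Record a trusted checksum per file; store the result OFF this host."""
    return {p: file_sha256(p) for p in paths}

def verify(baseline):
    """Re-hash each file and report anything changed or missing."""
    return [p for p, digest in baseline.items()
            if not os.path.exists(p) or file_sha256(p) != digest]
```

Run `make_baseline()` on a known-clean system, then later run `verify()` from trusted media; any path it returns has been altered or deleted since the baseline was taken.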

Both forms of IDS are expensive and difficult to use correctly. Virtual machines make this somewhat easier, but it's still much better to have dedicated machines for these things.

rkhunter is nice if its job is to make you feel good. If its job is to make sure your machine is secure, then it's shit. (No offense to the rkhunter authors; I am sure they understand its role and effectiveness... too bad their users don't tend to.) It's only good against kiddies that don't know better, and if you're being owned by kiddies then you have bigger problems.

There's an interesting third approach, used by Sysinternals' (now part of MS) RootkitRevealer for Windows.

Basically, enumerate all the files on the system using the usual OS APIs. Then, scan the entire raw disk, and enumerate all the files on the system by manually interpreting the directory structures stored on-disk. Any files whose directory entries exist on-disk, but don't show up in the OS's API (with a few standard system exceptions) are being hidden from the OS API layer by a rootkit.
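The cross-view diff itself reduces to a set difference (Python sketch; the hard part in practice is the raw on-disk enumeration, which this skips entirely):

```python
def cross_view_diff(api_view, raw_disk_view, known_exceptions=()):
    """Files present in the raw on-disk scan but absent from the OS API
    listing (minus known system exceptions) are candidates for files
    a rootkit is hiding from the API layer."""
    return sorted(set(raw_disk_view) - set(api_view) - set(known_exceptions))
```

For example, `cross_view_diff(["/bin/ls"], ["/bin/ls", "/tmp/.rk"])` flags `/tmp/.rk` as hidden. The technique works because the rootkit must lie consistently at the API layer, but the bytes on disk still have to describe the real files.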

Hmm, I foresee a tiered approach being useful, much like what Windows does. First, there's the administrator permission level, which is supposed to be the Windows equivalent to wheel (and Administrator being the Windows equivalent to root.)

We had a cracked linux server at work one time and I took it upon myself to find out who did it. Long story short: some server monkey decided it would be a fun idea to ride his bike around inside the data center and smashed into one of the racks.

1. Infect Linux server of some guy with a blog.
2. Guy blogs about how he dealt with said infection.
3. Blog posting gets linked to on Slashdot.
4. Millions of computers attempt to access the blog, hence bringing down the server.

Don't you see? We've created a socially engineered botnet!

(And please, for the love of all that is sacred and funny, don't reply to this and add steps for "???" and "Profit". It's just tired and completely not funny. And the clever little variation on that theme you're thinking about posting right now isn't funny either.)

I got hacked back in February - March 2001 time-frame. I made the mistake of setting up my Linux server as a router, and left my Samba and NFS shares active. This kind of info would have really helped me then.

Does Ubuntu install SELinux and a policy in a default installation, or is it necessary to add it later?

I've only performed one Ubuntu install; most of my experience is with the Red Hat and Fedora Linux distros. Fedora installs SELinux with a targeted policy enforcing by default, which I think is a good thing. I had an experimental Fedora web server with phpBB installed which was compromised via the phpBB application, but looking through the log files it appeared that SELinux had thwarted attempts to root the box or set up a zombie to connect to an IRC server.

Other than the mistake of an outdated phpBB application, I also made the mistake of allowing execution of code in /tmp; lesson learned. But it was interesting to see SELinux do its job, and I'd be curious whether it was utilized in this instance.

I think SELinux is still a mixed bag when it comes to distribution support. My attempts at using SELinux with Debian have been disappointing. Red Hat AS4's SELinux works out of the box, but it is not enabled by default.

Ubuntu, as of the latest version (Feisty Fawn), does not install SELinux. If you want that functionality, you'll have to install it yourself. I think this is because SELinux policies can be difficult for beginning users to navigate. Also, when SELinux thwarts execution of some file, there is often no explicit message stating that SELinux blocked the file and that you should change your configuration. In all too many cases, the user is left on their own to figure out why their file isn't executing.

I work in a large, low-end datacenter. Almost all the servers there are rented by non-technical people, who for some reason feel qualified to run web hosting businesses. There are so many exploits going on there at any given time, we can't really do anything about it, especially as the customer is theoretically responsible. So when they call in because their server is running slow, we usually find a PHP hijack happening, tell them their server has been compromised, and suggest that they do something about it.

It's pretty appalling. We would need an army of sysadmins--an army which is currently employed already--to really do something about it. Most of what we see are primitive script kiddie hacks, but guess what--that's good enough, and rarely are the perpetrators hunted down.

Not only are you right, but even when the perpetrators are hunted down, nothing happens to them. Take a look at the Morris Worm and the David LaMacchia case for good examples of how perpetrators escape punishment. (Morris's father was a senior figure at the NSA; LaMacchia was an MIT student, and MIT's lawyers did a great job of stonewalling the prosecution to avoid a student being convicted, and apparently encouraged the prosecution to file charges trying to set an unlikely precedent, making people who host warez re

I wouldn't be too critical of the techy in this situation.
It's more about 2 screwed up business models (If you look at it from a technical point of view).
They want cheap servers with bandwidth, buy cheap servers and buy shitloads of bandwidth. Offer them for really cheap prices ( 10,000 Servers. They may have five or six people on a shift for maintaining these systems. These guys are responsible for patch management and backup/restore, plus they have to physically replace the systems which crash (Usuall

We have numerous servers on various subnets, and after having more than one server cracked we have learned a few elemental precautions:

1. Change the ssh port to something other than 22;
2. Use different root passwords on each machine;
3. Use SELinux to block connections from IP addresses you do not control and to ports you don't want the machine connected to (like 6667);
4. If possible, route all packets through a bridged machine which you can then use to monitor activities... be especially wary of IRC connections;
5. If you have email users, set them up as nologin or /bin/false;
6. If you use ftp, do not allow anonymous logins or, if you must allow connections, do not allow anonymous uploads;
7. Configure syslog so that it logs to several locations; and,
8. Use access lists on the routers to limit connections both in and out (including the new ssh port).
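Item 1 on the list, for instance, is a one-line change in the OpenSSH server config (fragment below is illustrative; the port number is arbitrary, and the second directive is a common companion hardening step not mentioned in the list above):

```
# /etc/ssh/sshd_config (fragment)
Port 2222             # move sshd off the default port 22
PermitRootLogin no    # require logging in as an unprivileged user first
```

Remember to update any router access lists (item 8) and restart sshd after changing the port, ideally while keeping an existing session open in case the new config locks you out.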

Crackers often forget to change lsof (list open files), and that utility can often be used (or reinstalled) to determine whether a machine has been cracked and where the nasty bits are hidden.

Okay, I've not yet RTFA. Did it specifically say, "bog-standard Ubuntu 6.06 with absolutely no additional software and only the bare necessary configuration changes for system differentiation purposes"? I ask because everyone seems to be looking very closely at the initial OS distro, and almost any server that's been put into useful production has been tweaked in some way from the official packages. Stuff gets compiled from source. Custom stuff gets coded. Packages get installed out of third-party reposit
