So you suspect that something strange is happening with a server, but you’re not quite sure what. Perhaps it’s been compromised, perhaps not. Let’s talk about some methods for figuring out what’s up on Unix-based operating systems. It’s not always straightforward.
If you’re running a Web server or user login server, it’s even more difficult to tell whether you’ve experienced a security incident. Seemingly strange traffic could be legitimate; you just don’t know. If your server is blasting out UDP packets as fast as possible, that’s a pretty good indicator that it’s being used in a DoS attack, but we’re talking about more subtle issues.
First, we need to understand the difference between a simple user account compromise and a root compromise. Unlike Windows, a compromised user account on Unix doesn’t force you to format and reinstall the OS. An unprivileged user can’t do much lasting harm beyond their own account, unless perhaps you’re behind on kernel patches and the attacker manages to escalate to root. Most of the time, if there’s no evidence of root-level activity, you’ll probably be fine with just disabling the affected user account.
Is a Web page compromise a system compromise? Sometimes, yes. Most of the time, though, a compromised Web page is just that: a defacement. Frequently spammers will fill a page with tons of links and other strange-looking data. Likewise, botnet owners may post exploit code on the page so that other hosts can download it. These activities are relatively harmless to the server itself: restore the page from backup and try to close the attack vector so it doesn’t happen again.
However, spammers will sometimes launch PHP or Perl scripts on your Web server, which then start sending out spam. This type of compromise is easy to track down: there will be a process running as the Web server user. Unfortunately, most of these exploits download their code to /tmp and then delete it once they’re running, so you can’t immediately tell how they got in in the first place. This is where your Web server logs come in handy.
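Grepping those logs for suspicious requests is usually where the trail starts. Here’s a minimal sketch against a made-up Apache combined-format excerpt; the sample path and log lines are hypothetical, and real logs typically live somewhere like /var/log/apache2/access.log or /var/log/httpd/access_log:

```shell
# Create a hypothetical two-line access-log sample so the grep runs anywhere.
cat > /tmp/access.log.sample <<'EOF'
10.0.0.5 - - [12/Mar/2007:04:12:01 -0500] "GET /index.php HTTP/1.1" 200 5120 "-" "Mozilla/4.0"
203.0.113.9 - - [12/Mar/2007:04:12:44 -0500] "POST /gallery/upload.php?cmd=wget HTTP/1.1" 200 312 "-" "libwww-perl/5.79"
EOF
# Injection attempts often show up as POSTs to PHP scripts, or as shell
# commands (wget, curl) in the query string; libwww-perl is a classic
# exploit-script user agent.
grep -E 'POST|cmd=|wget|curl|libwww' /tmp/access.log.sample
```

The second line of the sample is the kind of entry you’re hunting for: a POST to a PHP script with a shell command in the query string, from a scripted client.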
The lsof command is your friend. When you find a strange process running, the first thing to do is check what files it has open. You may have discovered the process via your network people, who told you that a Unix server was joining (or running, gasp!) a botnet via port 8881, for example. You need to figure out what process has that port open, and then see what other files it’s using. Most of the time you’ll find an exploit script, commented in Portuguese or Russian, stashed away somewhere in /tmp. Chances are it was downloaded from another site through a vulnerable Web page, and the Web server logs will show you exactly which PHP (most likely) script was involved. If it’s running from /tmp, odds are good it’s nothing more than a user-level compromise. Repair the entry point and get on with life.
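A sketch of that hunt. The lsof invocations are shown as comments because they need a live suspect process and root privileges; the /proc walk, which gives you the same information straight from the kernel, is demonstrated on the shell’s own PID so the commands run anywhere:

```shell
# Step 1: who owns TCP port 8881 (the example port from above)?
#   lsof -i TCP:8881          # shows command name, PID, and user
# Step 2: what else does that PID have open?
#   lsof -p <PID>
# If you distrust the lsof binary itself, /proc tells the same story.
# Demonstrated here on this shell's own PID:
pid=$$
readlink /proc/"$pid"/exe    # the running binary ("(deleted)" if removed)
readlink /proc/"$pid"/cwd    # working directory, often /tmp for exploits
ls /proc/"$pid"/fd           # every open file descriptor and socket
```

An exploit that deleted itself after launching still shows up here: /proc/PID/exe points at the unlinked binary, and you can often copy it back out for analysis.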
What about a root compromise?
This is where things get a bit more difficult. Most of the time an attacker guesses a user’s password, logs in with that local account, and then attempts any number of root exploits locally. This is a dangerous place to be, but it isn’t the end of the world. With properly configured logging, you may notice that a certain username is running a process that crashes with a segmentation fault (attempting to access memory that isn’t theirs). Inspect that account immediately.
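The kernel logs those segfaults. Here’s a minimal sketch against a made-up syslog excerpt; real locations vary (/var/log/messages on Red Hat-style systems, /var/log/kern.log on Debian, or just `dmesg | grep -i segfault`):

```shell
# Hypothetical syslog excerpt showing a local root exploit being retried.
cat > /tmp/messages.sample <<'EOF'
Mar 12 04:13:02 web1 kernel: exploit[4211]: segfault at 0804b000 ip 0804a1c2 sp bfa3e0c0 error 6
Mar 12 04:13:05 web1 kernel: exploit[4215]: segfault at 0804b000 ip 0804a1c2 sp bfa3e0c0 error 6
EOF
# A burst of segfaults from one account's processes is a classic warning
# sign: local exploits often crash repeatedly before one finally lands.
grep -ci segfault /tmp/messages.sample
```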
If a remote root exploit is used, you’re compromised from the get-go. Your first step is to find out what they’ve done; next, you get to figure out how they did it. The old method is to search for setuid files. Unfortunately, this is less useful than it sounds: there are setuid files all over the place on a standard Unix install, and if you don’t know what’s “normal,” you won’t know what’s “strange.” Finding a root-owned setuid file in /tmp, buried in /dev, or in someone’s home directory is a pretty good indicator, though.
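The sweep itself is a one-liner with find. The demo below plants a setuid file in a scratch directory so the command can be exercised without root; in a real investigation you’d scan the whole filesystem:

```shell
# Real sweep (slow; run as root, one filesystem at a time):
#   find / -xdev -type f -perm -4000 -user root -ls
# Portable demo in a scratch directory:
scratch=$(mktemp -d)
touch "$scratch/suspect"
chmod 4755 "$scratch/suspect"    # set the setuid bit (harmless here: not root-owned)
find "$scratch" -type f -perm -4000 -ls
rm -rf "$scratch"
```

Saving the output from a known-clean system gives you the “normal” baseline to diff against later.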
Next, check log files and wtmp. If someone has logged in to your server from an unknown location, you know something is up. Also check for open ports, and try to telnet to them. Root kits often include a ‘bindshell,’ which simply listens on a port and provides a root shell to anyone who connects.
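A sketch of both checks. The `last` command reads wtmp directly; the awk one-liner pulls listening TCP sockets straight out of /proc/net/tcp, which is handy when you suspect the netstat binary itself has been replaced (socket state 0A means LISTEN, and the ports there are in hex):

```shell
# Recent logins, with source hostnames (reads /var/log/wtmp):
#   last -a | head -20
# Listening TCP sockets straight from the kernel:
awk 'NR > 1 && $4 == "0A" { split($2, f, ":"); print f[2] }' /proc/net/tcp |
sort -u |
while read -r hexport; do
    printf 'listening on TCP port %d\n' "0x$hexport"
done
```

Any port in that list that you can’t map to a service you installed deserves a telnet and a hard look.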
The chkrootkit program is very useful for detecting root kits, assuming you aren’t facing a brand-new one. It inspects all kinds of things, more thoroughly than I can cover here. Be careful running commands as root during an investigation; the binaries themselves may have been tampered with. Most operating systems’ package managers include some sort of checksum database, so you can verify easily enough that programs haven’t changed, though beware that it’s certainly possible for an attacker to alter the checksum database as well.
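On RPM-based systems the verification command is `rpm -Va`; Debian and friends have `debsums -c`. The underlying idea is just a checksum manifest, sketched here self-contained with sha256sum:

```shell
# Distro-native verification:
#   rpm -Va        # RPM-based systems: flags size/mode/checksum changes
#   debsums -c     # Debian/Ubuntu (debsums package): lists changed files
# The underlying idea, demonstrated with sha256sum:
scratch=$(mktemp -d)
printf 'hello\n' > "$scratch/bin"
sha256sum "$scratch/bin" > "$scratch/manifest"   # take a clean baseline
sha256sum -c "$scratch/manifest"                 # verifies: OK
printf 'tampered\n' > "$scratch/bin"             # simulate a trojaned binary
sha256sum -c "$scratch/manifest" || echo 'tampering detected'
rm -rf "$scratch"
```

Keeping the baseline manifest off the suspect machine is what defeats the altered-database trick mentioned above.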
Of course, it’d be nice to know beyond the shadow of a doubt whether or not system binaries have been tampered with. This is where host intrusion detection software is useful. Products like Tripwire store a database of every file’s checksum on a central server. It’s a bit of a pain to configure initially, since the normal operation of a server results in hundreds of files changing daily. Once a sane list of files to monitor has been established, Tripwire becomes very useful.
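For a flavor of the configuration, here’s a minimal policy fragment in Tripwire’s twpol.txt syntax. The rule name, severity, and directory list are illustrative; $(ReadOnly) is one of Tripwire’s predefined property masks for files that should never change:

```
(
  rulename = "System binaries",
  severity = 100
)
{
  /bin      -> $(ReadOnly) ;
  /sbin     -> $(ReadOnly) ;
  /usr/bin  -> $(ReadOnly) ;
}
```

After editing the policy you regenerate the database and run `tripwire --check` on whatever schedule suits you.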
There are many aspects of a server to check when a compromise is suspected. Each new clue will lead the investigator off in strange and unpredictable directions. A good, but dated, starting point is outlined on a CERT Web page. Programs like chkrootkit will uncover most of these items, but it’s still a useful review of the “common” items to check.
Every compromise is different, and the hardest part is discovering the attack vector. You want to prevent the intruders from compromising other servers that may be vulnerable to the same thing, so do some investigation before reinstalling. And yes, you must reinstall a Unix server if root was compromised; root kits are tricky, and you can never be 100 percent certain that you’ve repaired the system. Happy hunting.