Your company’s Unix infrastructure is likely the most important piece of the overall IT puzzle. It runs your mail, your Web servers, and probably your most important business applications. The security of this infrastructure cannot be taken lightly. In this article we will explore some best practices that everyone should be aware of, and then next week we’ll talk about implementing these ideas with a sample infrastructure.
So what comprises a Unix infrastructure, anyway? This is difficult to define, but we can generalize. Most companies run customer-facing, or public, services; these are “public” servers, and anything that provides a service to the outside world deserves special treatment. There are also machines that users can log into. The user may be an actual customer, as at an ISP or university, or a member of a development team in the corporate world. We’ll call these login servers, and they’re treated specially too.
Then there’s everything else—probably the bulk of your infrastructure—the machines that provide services to other servers, which only administrators can access. This is a bit of a gray area, but for the sake of brevity, just go with it.
First and foremost, one must look at every server that provides services to the world and ask, “Does it need to?” Oftentimes it can be placed behind a firewall, or a combination firewall and proxy device. If, for example, you run a customer-facing website on four Web servers, it may be possible to minimize their exposure. A proxy server (or a pair of redundant proxy servers) placed in front of the Web servers can accept all customer connections, then inspect and sanitize the requests; that, among other things, is what proxy servers are designed to do. The proxy mitigates the risk to the back-end Web servers, which, incidentally, no longer need to be Internet-accessible at all.
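As a sketch of what this looks like in practice, here is a minimal reverse-proxy stanza for Apache’s mod_proxy. The internal hostname is purely illustrative, and a real deployment would layer request filtering and access controls on top of it:

```apache
# Forward-proxying must stay off; we only relay for our own site.
ProxyRequests Off

# Relay all incoming requests to the internal Web farm.
# (webfarm.internal.example.com is a placeholder name.)
ProxyPass        / http://webfarm.internal.example.com/
ProxyPassReverse / http://webfarm.internal.example.com/
```

With a rule like this on the proxy, the back-end servers need only accept connections from the proxy’s address, not from the Internet at large.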
The most frequent cause of security problems is unpatched or unknown services. The long-since-forgotten Web server running an old version of Apache, a few vulnerable PHP scripts, and an outdated kernel is an all-too-common recipe for disaster. But if all your Web servers are hidden behind a proxy server, there’s little risk of forgetting one.
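One low-tech way to catch forgotten services is to inventory what is actually listening on each host. The helper below is a sketch that parses `netstat -an` output (Linux-style field layout; other Unix flavors differ slightly) and prints each TCP port in the LISTEN state:

```shell
# Print the TCP ports a host is listening on, given `netstat -an`
# output on stdin. The awk field positions match Linux netstat;
# adjust them for your Unix flavor.
netstat_listeners() {
    awk '$NF == "LISTEN" && $1 ~ /^tcp/ {
             sub(/.*[.:]/, "", $4)   # strip address, keep the port
             print $4
         }' | sort -un
}
```

On a live host you would run `netstat -an | netstat_listeners` and compare the result against the list of ports you expect; anything unexpected deserves investigation.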
It’s the same story with all other services. Some sites are so restrictive that a firewall administrator must approve any new network usage, and that works well for them. More often, though, a company is completely wide open: its Web applications are insecure, and the servers those applications interact with are Internet-accessible for no reason at all.
In October we ran a few articles about Unix multi-user security. The first, about why securing a multi-user system is difficult, presented some ways to lessen the risk. In the second, we preached about the perils of misusing sudo. Both points bear repeating: multi-user is difficult.
Remote users are limited to the interfaces they are given, such as e-mail services, Web applications, or B2B transactions. Local users—people with shell access—are completely unfettered. If you happen to have a malicious user, he will get root access unless extreme caution is taken. Updates, especially those pesky kernel updates that require a reboot, must be applied the day they are released. The operating system must be hardened. Great care must be taken in the entire design of your infrastructure to ensure that users have access only to the areas you expect.
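Much of that caution comes down to how narrowly privileges are granted. As a sudo-flavored illustration (the username and command here are hypothetical), compare a sudoers rule that effectively hands out root with one that grants a single task:

```
# /etc/sudoers fragment -- dangerous: dev1 can run anything as root,
# including a shell.
dev1    ALL = (ALL) ALL

# Narrower: dev1 may only gracefully restart the Web server.
dev1    ALL = (root) /usr/sbin/apachectl graceful
```

Even the narrow form deserves scrutiny: any permitted command that can spawn an editor or a shell escapes the restriction entirely.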
If you also have developers who require root access on some machines, you’re in a world of hurt. It’s unlikely that the developers themselves will be malicious (though don’t exclude the possibility). Rather, it’s normally the strange and newfangled applications they unknowingly install that will bite when least expected. The Slammer worm propagated so quickly because MS-SQL was installed on thousands of computers that nobody knew about; oh, the joy of automatic installers.
Next week we’ll talk about updates, NFS, OS hardening, and file system permissions. For now, just be (in)secure in the knowledge that these are the most difficult machines to manage.
And then there’s everything else. In theory, the bulk of a business’s machines are not Internet-facing. Assuming this is true, since limiting one’s exposure is a high priority for every company, we can more or less ignore those servers. The only vulnerable point is the interface they provide, right? As long as my Web application is patched regularly, there’s no need to worry about the operating system itself. Yes, some people really do believe this.
If you’re in the type of business that can restrict all login access to sysadmins only, then that is mildly true. Keeping the applications patched just might be enough to get by. But when the one security hole is missed, probably because it was so indirect it wasn’t even considered, your entire infrastructure is at risk, not just the one server. Once attackers finally get inside, they generally find that spreading to other servers is very simple. It doesn’t have to be.
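TCP wrappers are one simple, host-level way to make that internal hopping harder: even machines the firewall never sees can refuse connections from anywhere but the administrative network. A minimal sketch, with a placeholder network address:

```
# /etc/hosts.deny -- default-deny for all wrapped services
ALL: ALL

# /etc/hosts.allow -- only the admin network (10.0.5.0/24 here,
# purely illustrative; the trailing dot means "this prefix")
# may reach sshd.
sshd: 10.0.5.
```

An attacker who compromises one application server then finds that the neighboring hosts simply won’t talk to him.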
See, there are two approaches to securing an infrastructure: limit your exposure and hope the unthinkable doesn’t happen, or harden everything so that an attacker who does penetrate your defenses can’t do anything harmful afterward. Why not do both? It’s almost always the case that an organization’s security policy fits into only one of these categories, even if the organization can’t admit it.
Firewalls are extremely easy to circumvent, especially when the exposed applications behind them are themselves at risk. In fact, a seemingly bulletproof firewall often attracts more attackers, not because they want a challenge, but because they know the inside is very likely softer than the shell. That said, we mustn’t forget: most businesses still have a soft exterior as well.
Come back for next week’s article, which will provide a generic infrastructure example with the technical details necessary to prevent a gooey infrastructure.