DDOS attacks’ ultimate lesson: Secure that infrastructure

Compulsory Security?

If Janet Reno gets her way, best security
practices may someday become compulsory to doing
business on the Internet. This is already
beginning to happen in government systems.
Starting in its fiscal year 2002 budget, the
Office of Management and Budget will not pay for
new systems that are inadequately secured.

It’s only a matter of time before security
imperatives reach the private sector. Take the
National Infrastructure Protection Center
(NIPC), for example. For the past eight months,
NIPC, managed by the FBI, has been archiving
reported private-sector security problems and
events. NIPC is already recommending best of
practices private sector businesses it deems
critical to the day-to-day operations of the
national infrastructure.

The Big 5 accounting firms are also in the
security game – to ensure the crown jewels
(a.k.a. intellectual property) of their clients
are insured. After all, they owe it to
shareholders. Law firms are raising a new army
of cyber attorneys and looking at e-business
liability issues. And insurance companies are
hammering out new policy directives that will
soon apply security due diligence to policies
carried by companies with any Web presence.


The media frenzy surrounding the February DDOS attacks on Amazon, Yahoo and eBay isn’t surprising. Would-be consumers, Janet Reno, even the President realized just how easily some of Wall Street’s favorites can be taken out of business. It was a real wake-up call.

What is surprising, at least to those who’ve been following e-commerce and security for awhile, is the media’s failure to look into the core problem, which is shoddy infrastructure. If companies developing their Web stores paid more attention to security, these attacks would not have been nearly so successful.

Security professionals lament that e-businesses have been slapping up their electronic storefronts with little thought to security until the inevitable happens and they get hacked. DDOS attacks are just another step up the elementary hacking ladder. And DOS attacks will continue to morph into more distributed, powerful and undetectable strains, making these types of attacks particularly troublesome.

But by following best-of-breed security practices, many an e-business could, at the least, hold its downtime to 10-15 minutes instead of the two-to-four-hour lapses that occurred in February. At best, they could deflect the attacks altogether.

But to understand the most efficient way to harden that infrastructure, we first need to dissect the elements of the February DDOS Trin00 and TFN attacks.

It starts with a “master machine” (the attacker’s machine), which delivers a Trojan to the “slave” machine. This Trojan is a tiny, easy-to-hide program poised to launch DOS attacks at an intended target.

Now imagine thousands, possibly hundreds of thousands, of these Trojans infecting networked machines with a Web front end, then waiting. At the appointed time, or at the command of the attacker, just hundreds of these “slaves” fire a combined load of 800 megabits to one gigabit per second at specified targets.

These masses of data rocket through a string of ISPs and onto the target where they overwhelm the routers, clog the pipes, and render the servers useless.
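The arithmetic behind that flood is worth making explicit. The per-slave rate below is an assumed, illustrative figure, not one reported for the February attacks; the point is only that hundreds of modest senders sum to an overwhelming stream:

```python
# Back-of-envelope estimate of aggregate DDOS traffic. Both numbers
# are illustrative assumptions, not measurements from the attacks.
slaves = 500                 # "slave" machines firing at once
per_slave_mbit = 2.0         # assumed sustained output per slave, in Mbit/s

aggregate_mbit = slaves * per_slave_mbit
print(f"Aggregate: {aggregate_mbit:.0f} Mbit/s")
```

Even at a couple of megabits per second apiece, well within reach of a university machine on a fast link, 500 slaves saturate a gigabit of downstream capacity.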

Each of the organizations along this chain – the “slave,” the ISP, and the victim – hold some culpability for lax security that helped make these attacks successful.

The Slave’s Role

Let’s start with the first link, the slave that haplessly launches the attack. Most of the slaves have been located at universities with high-speed Internet connections. Universities are notoriously the least-protected networks on the planet. Even before Kevin Mitnick started infiltrating England’s Leeds University VAX machines back in the late ’80s, university machines had long been used as launch points for hack attacks.

Bob Hillery, information systems management professor at New Hampshire Community Technical College explains, “The good news about open university is it’s open and you can get lots of information. The bad news is it’s open.”

There’s open. And then there’s wide open – as in no firewalls, no intrusion detection, and no security policy. Suffice it to say, .edu’s make dangerous Internet neighbors.

“I worked for eight years in academia. The entire model of how most universities set up their IT systems is bottom-up,” says Ken van Wyk, CTO of Para-Protect, Inc., an IT security services firm in Alexandria, Va. “Professors, grad students, and even dorm undergrads set up and maintain their own systems. I do not know of a single university that is running a firewall on its campus network. In the business world, that would be unacceptable.”

Universities, and any business with powerful servers connected to the Web, have a moral obligation to set and enforce best security practices.

“I have the least patience with these universities and companies who allowed themselves to be used as launch points,” says Jeff Johnson, president of META Security Practices Group in Washington, D.C. “If you’re going to take part in this Internet economy, it’s your responsibility to follow basic security practices.”

Johnson contends it’s only a matter of time before businesses are held liable if their equipment is used to launch attacks on their Internet neighbors (see sidebar).

Anatomy lesson

Back in August, security professionals at the
Department of Defense, along with private
security firms like Para-Protect, Inc., in
Alexandria and groups like SANS (System
Administration, Networking and Security –
www.sans.org), began finding hidden executable
Trojans on servers, mostly on university
networks.
One of these .edu’s actually followed the
command trail back to a “master machine,” but
accidentally destroyed its own evidence,
according to sources at Para-Protect. Another
.edu removed the Trojan from its system only to
be hit with a massive DOS attack that blocked
its Internet connection for hours.

When they dissected these compact apps, they
realized these Trojans were poised and ready to
do the bidding of a yet-to-be-found “master
machine.”
The security community galvanized around
possible scenarios and solutions. In December,
Carnegie Mellon’s CERT Coordination Center sent
out an alarm warning of an imminent, large-scale
DOS attack that could cripple the e-economy.

Antivirus and intrusion detection vendors
started covering the attack signatures for what
were now called the Tribal Flood Network and
Trin00, both of which were floating around on
the hacker undernet. Meanwhile, the attack
signatures were morphing into undetectable
strains, thanks to compression tools like those
offered in the Back Orifice 2 package. As a
result, these attack tools continued to load
onto victim machines under the radar of
intrusion detection and antivirus tools.

CERT, SANS and others thought the attacks
might hit on New Year’s Eve, which turned out
to be a very quiet night. Finally, in February,
the inevitable happened. Internet giants like
Yahoo, eBay, Buy.com and Amazon started
toppling like dominoes.

For a moment, it looked as if the sky were
falling on e-commerce. But like all things in
cyberspace, that moment came – and passed – at
light speed.

Johnson isn’t talking bulletproof security. Buggy operating systems, immature vendor solutions, interoperability problems and heterogeneous networks make bulletproof unreachable. But he is talking about covering all the bases.

“I’m talking about reasonable means that would lead to quick detection and containment,” he adds. “Those used to launch the DDOS attacks were wide open, flat networks – no firewalls, no policies/procedures or requirements.”

At the very least, basic security policy for those businesses and universities connected to the Internet should include:

– Monthly (or more frequent) scans for Trojans and vulnerabilities. Jerry Zepp, Chief Security Officer for Comstar.net, a business hosting ISP in Atlanta, is particularly fond of the scanning tool available free from http://www.nessus.org/.

– Redundant routers, firewalls, intrusion detection, and anti-virus.

– Current browser and operating system patches.

– Closing unused TCP and UDP ports, including those of legacy services such as FTP.

– User awareness. Since Trojans are often downloaded in email attachments, remind your users not to open .exe attachments.
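The port-closing item on this checklist can be spot-checked from any machine you administer. This is a minimal audit sketch in Python, not a substitute for a full scanner like Nessus; the port list and function name are illustrative:

```python
import socket

# A handful of commonly abused services; extend as needed.
COMMON_PORTS = {21: "ftp", 23: "telnet", 25: "smtp", 80: "http", 111: "rpcbind"}

def open_ports(host, timeout=0.5):
    """Return the (port, service) pairs on `host` that accept a TCP
    connection. Anything listed that isn't needed is a candidate for
    closing."""
    found = []
    for port, name in COMMON_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising.
            if s.connect_ex((host, port)) == 0:
                found.append((port, name))
    return found

if __name__ == "__main__":
    print(open_ports("127.0.0.1"))
```

Run monthly, diffed against the previous month’s output, even a sketch like this catches a service that quietly appeared on a box.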

The middle link, the ISP

There are also a number of questions around the practices of the ISPs who allowed these massive data storms to pass down to some of their most lucrative clients.

Why didn’t the ISPs see this data storm coming? And why didn’t they alert their clients to the impending data flood, especially when some network management and other vendor tools automate this process? (Check out ODS Networks’ Computer Misuse and Detection System Early Warning System at http://www.ods.com/)

First, while traffic management and shaping are possible, ISPs haven’t really figured out how to work them into a service level agreement. Conversely, ISP customers don’t know how to ask for them.

The second problem is speed. To put it plainly, load balancing slows down traffic, something ISPs (and their customers) are loath to do, says Ian Poynter, president of the security consulting group, Jerboa, Inc., in Cambridge.

“There are various ways to check load at the router and throttle back the connection if you all of a sudden see a large spike,” adds Brian Koref, chief security officer at the business hosting services provider, Conxion, Corp. in San Jose, Calif. “But a lot of ISPs don’t want to put this on routers because it has a significant effect on performance.”

Filtering and load balancing also tap ISPs for more expensive equipment and administration. And in smaller ISPs, which lack 24 x 7 NOCs, these attacks would go completely unnoticed, continues Poynter. In those cases, Poynter argues, ISPs could offset the costs by charging premiums for such services, the same way phone companies do for caller ID.
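The throttling Koref describes is classically done with a token bucket: traffic is admitted up to a sustained rate plus a burst allowance, and the excess is dropped. A minimal sketch in Python, with class name and rates of my own choosing rather than from any router vendor:

```python
import time

class TokenBucket:
    """Per-link throttle: tokens refill at a sustained byte rate up to
    a burst capacity; a packet is admitted only if enough tokens remain."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False  # over the limit: drop (throttle) the packet

# A sudden spike: the burst allowance absorbs the first packets, then drops.
bucket = TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=10_000)
results = [bucket.allow(1_500) for _ in range(10)]
```

The performance objection in the article is visible even here: every packet costs a clock read and a comparison, which is why ISPs hesitate to put this on busy routers.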

Comstar’s Zepp says that ISPs, as a result of the February DDOS attacks, are beginning to implement filtering. They’re also forming consortiums and pledging to work together to identify vulnerabilities and track attackers between hubs. In February, some 23 ISPs, Comstar included, joined ICSA.net’s Internet Security Alliance (www.icsa.net/html/communities/ddos/alliance/index-shtml).

The final link: the victim

Since the best deterrent is fear of capture, the ISPs have the right idea. But boiling this down to a realistic, workable process will take some time. So potential victims of these DDOS attacks had better come up with a backup plan.

Surprisingly, those e-businesses knocked off the Internet by the February DDOS attacks didn’t even have backup plans or failover systems, contends META’s Johnson. He believes that if they’d had a contingency plan and redundant routing, their services would have been back in business in 10-15 minutes instead of 3-4 hours.

“We’ve talked to a few of these organizations that were attacked. And during their engineering planning, they didn’t take the right precautions to ensure availability if their main site went down,” he says. “They had no redundancy in place. They didn’t have multiple servers, roll-over lines and routers.”

Worse, at least one of the high-profile sites that got hit didn’t even have a firewall on its production systems, according to van Wyk. Not that a firewall would have helped in the DDOS scenario; but this underscores what security professionals grapple with every day: patching up e-storefronts hastily built with little thought to security.

“The intense pressure to get to market first has caused many good practices, like security, to get pushed back until ‘later’ at many e-commerce sites,” he says. “I’ve personally seen dozens that are completely unprepared for any significant attacks.”

In addition to lacking backup systems, e-businesses have inadequate perimeter security, says META’s Johnson. Most front their Web servers with only one or two routers. A stronger perimeter would front each main router with four inexpensive routers, which in the February attacks could have greatly eased the choke point. He also suggests multiple intrusion detection systems behind this perimeter.

DDOS vulnerabilities must also be addressed at the server level by limiting the number of connections to a box and the amount of time a TCP session can remain unfulfilled, adds Koref. (One type of DOS attack, SYN flooding, befuddles a server into not accepting new traffic by opening sessions but never completing them.)
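The two server-side limits Koref names, capping half-open sessions and expiring unfulfilled ones, can be sketched as a small bookkeeping table. The class, limits, and addresses below are illustrative assumptions, not the behavior of any real TCP stack:

```python
# Sketch of SYN-flood bookkeeping: cap the number of half-open TCP
# sessions and expire any whose handshake never completes in time.
class HalfOpenTable:
    def __init__(self, max_pending=128, timeout_s=10.0):
        self.max_pending = max_pending
        self.timeout_s = timeout_s
        self.pending = {}  # client address -> time its SYN was seen

    def on_syn(self, addr, now):
        """Return True if the new half-open session is accepted."""
        # First expire entries whose handshake stayed unfulfilled too long.
        self.pending = {a: t for a, t in self.pending.items()
                        if now - t < self.timeout_s}
        if len(self.pending) >= self.max_pending:
            return False  # table full: refuse, blunting the flood
        self.pending[addr] = now
        return True

    def on_ack(self, addr):
        """Handshake completed; the session is no longer half-open."""
        self.pending.pop(addr, None)

# Five SYNs arrive one second apart against a cap of three: the first
# three are accepted, the rest refused until old entries time out.
table = HalfOpenTable(max_pending=3, timeout_s=10.0)
accepted = [table.on_syn(("10.0.0.%d" % i, 1234), now=float(i)) for i in range(5)]
```

The trade-off is the same one the article describes for ISPs: too small a cap or too short a timeout starts refusing legitimate slow clients, so the limits have to be tuned against real traffic.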

“I’m sure some of the victims thought they were fine – until real-world testing proved otherwise,” says Poynter. Which leads to the final good that may come out of the February DDOS attacks on some of our most visible e-businesses: raised awareness.

But was it enough to bring about some very needed systemic change? Time will tell.


Deborah Radcliff writes high-tech crime and enterprise security stories for ComputerWorld, Upside, NetworkWorld, and other business and trade journals. A former beat reporter for the San Jose Mercury News, Radcliff switched to her current genre five years ago while researching Kevin Mitnick’s life on the lam for the book, “The Fugitive Game,” by Jon Littman.

SecurityPortal is the world’s foremost on-line resource and services provider for companies and individuals concerned about protecting their information systems and networks.
The Focal Point for Security on the Net ™
