Network Security: When Should You Announce a Breach?
Security breaches are embarrassing, and sometimes they put others in danger. When should you break the code of silence and come clean about a compromise?
Drafting a customer notification policy is no easy task. The first and most obvious factor that must be weighed is the industry in which you operate. Some retail businesses, for example, may store people's addresses, phone numbers and credit card numbers. A total site compromise will likely mean that all customer information is leaked. Other compromises may only leak company secrets—something the customer couldn't care less about.
The health care industry has tons of government-mandated policies to comply with, so its case is nearly a no-brainer: it discloses what it's told to. A business operating as, say, a channel partner of a medical company may still hold some private records without being subject to those disclosure rules. These businesses, and nearly every other kind, need to construct their own set of policies for notifying customers.
A few years back, the Veterans Administration lost millions of Social Security numbers when an employee lost a laptop. The agency immediately sent letters and e-mail to every present and former service member in the US, and it did so multiple times, giving updates as new details were found. It wasn't a list of credit card numbers that could be used directly, but the organization nonetheless notified the press immediately and then each customer individually.
When to Disclose?
Regardless of whether data was lost, some proponents of full disclosure would like all companies to announce every security incident, however small. There are good arguments both for and against this concept, so we'll take the middle ground. Some security incidents probably should be disclosed to the public even when no data has been leaked, and others can probably be ignored.
"What? If no data was leaked, I don't need to tell anyone!"
What if it's discovered later that some vital customer information was in fact compromised? When making the dreaded "we lost your information" announcement, the impact is somewhat softened if you can refer to a past compromise which wasn't kept secret. By the time the discovery of lost information occurs, it's also likely that the attack vector has been identified and remediated. The public's view of the situation changes greatly when a company can say they know what happened, and that they have fixed the security hole.
There are four main types of compromise to consider when drafting a security disclosure policy.
Web site defacement. This is the most common, and least harmful, type of security incident. If you're running an ISP, hosting company or university, you likely have hundreds of defaced Web sites right now. A defacement means that somebody has taken advantage of a poorly written Web application and was able to write data to a file, normally in the form of an "our group was here" message. This vulnerability can lead to other types of compromises!
User-level access. A user-level compromise means that someone has guessed a user's password, or was able to run a program via a vulnerable Web application. This isn't dangerous in itself, but it makes the breadth of the incident much more difficult to identify, and the possibility that the attacker now has access to confidential information is greatly increased.
Root compromise. A root compromise on a Web server, for example, is dangerous. An attacker is able to read Web applications and discover where your database of customer information lives, and possibly some passwords as well. The compromised system will need to be reinstalled, and depending on its role, your entire site now requires an in-depth security audit.
Theft or physical compromise. Computer security isn't just about operating systems. Theft of a laptop or backup tapes is the most common way that data is lost. Physical security incidents can lead to, or even be motivated by, an attempt at a computer system compromise. A homeless person found wandering around a restricted area probably doesn't warrant public disclosure, but almost everything else does.
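Defacements and root compromises of the sort described above are often first noticed when file contents change unexpectedly. The following is a minimal sketch of a file-integrity baseline, one simple way such tampering can be detected; the function names and the idea of hashing a fixed list of files are illustrative choices, not something prescribed by any particular product.

```python
# A hedged sketch: record known-good SHA-256 digests for a set of
# files, then later report any file whose contents have changed.
import hashlib
from pathlib import Path

def hash_file(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths):
    """Record a known-good digest for each file while it's trusted."""
    return {p: hash_file(p) for p in paths}

def changed_files(baseline):
    """Return the files whose current digest no longer matches the baseline."""
    return [p for p, digest in baseline.items() if hash_file(p) != digest]
```

In practice, the baseline itself must be stored somewhere the attacker can't rewrite it, or a root-level intruder will simply update the digests along with the files.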
Often, IT staff may discover a security incident but fail to notify management, or management may deem the issue unimportant and fail to notify upper management. Both of these situations need to be addressed in a disclosure policy document as well. It's more likely that the IT staff will be the ones hiding an incident, because they feel personally responsible and ashamed when any type of incident occurs.
Even something as simple as a Web site defacement needs attention. It probably isn't worth a company-wide announcement, but the simple cases often turn into more complex ones once a proper investigation begins. A seemingly simple Web site defacement could have led to someone gaining user-level access to a Web server, and fan-out could have occurred from there.
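One early step in the kind of investigation just described is to find every request the defacing client made, to see whether the attacker ranged beyond the single vulnerable script. The sketch below assumes Apache-style combined-format access logs, where the client address is the first field and the request line is quoted; the log layout, field positions and function name are all assumptions for illustration.

```python
# A hedged sketch: given access-log lines in Apache combined format,
# pull out every (method, path) request made by one client address.
def requests_from(log_lines, client_ip):
    """Return (method, path) pairs for every request made by client_ip."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if parts and parts[0] == client_ip:
            # The request appears as: "GET /path HTTP/1.1" starting at field 5.
            method = parts[5].lstrip('"')
            path = parts[6]
            hits.append((method, path))
    return hits
```

If the attacker's address shows up against scripts or directories unrelated to the defaced page, the "simple" incident has already fanned out and should be escalated accordingly.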
Internal reporting procedures are highly company-dependent, but disclosure to the public should not be. Even when no data has been lost, a root-level compromise should always be disclosed to the public. California's new disclosure law seemed a bit harsh to some, but it is narrow: reporting is required only when data is lost. We'd like to see all major security incidents reported, regardless of data loss. That information, along with the attack vector and resolution, would go a long way toward helping all sites become more secure.