The chilling effect

How the Web makes creating software vulnerabilities easier, disclosing them more difficult and discovering them possibly illegal.

From buffer overflows to cross-site scripting

Three vulnerabilities that followed the responsible disclosure process recently are CVE-2006-3873, a buffer overflow in an Internet Explorer DLL file; CVE-2006-3961, a buffer overflow in an ActiveX control in a McAfee product; and CVE-2006-4565, a buffer overflow in the Firefox browser and Thunderbird e-mail program. It's not surprising that all three are buffer overflows. With shrink-wrapped software, buffer overflows have for years been the predominant vulnerability discovered and exploited.

But shrink-wrapped, distributable software, while still proliferating and still being exploited, is a less desirable target for exploiters than it once was. This isn't because shrink-wrapped software is harder to hack than it used to be -- the number of buffer overflows discovered has remained steady for half a decade, according to the CVE. Rather, it's because websites have even more vulnerabilities than packaged software, Web vulnerabilities are just as easy to discover and exploit, and, more and more, that's where hacking is most profitable. In military parlance, web pages provide a target-rich environment.

The speed with which Web vulnerabilities have risen to dominate the vulnerability discussion is startling. Between 2004 and 2006, buffer overflows dropped from the number-one reported class of vulnerability to number four, while Web vulnerabilities shot past them to take the top three spots. The number-one reported vulnerability, cross-site scripting (XSS), accounted for one in five of all CVE-reported bugs in 2006.

To understand XSS is to understand why, from a technical perspective, it will be so hard to apply responsible disclosure principles to Web vulnerabilities.

Cross-site scripting (which is something of a misnomer) uses vulnerabilities in web pages to insert code, or scripts. The code is injected into the vulnerable site unwittingly by the victim, who usually clicks on a link that has HTML and JavaScript embedded in it. (Another variety, less common and more serious, doesn't require a click.) The link might promise a free iPod or simply seem so innocuous (a link to a news story, say) that the user won't deem it dangerous. Once clicked, though, the embedded script is reflected back by the targeted website and runs in the victim's browser, within that site's security context. The scripts usually have malicious intent -- from simply defacing the website to stealing cookies or passwords, or redirecting the user to a fake webpage embedded in a legitimate site, the kind of high-end phishing scheme that hit PayPal last year. A buffer overflow targets an application. But XSS is, as researcher Jeremiah Grossman (founder of WhiteHat Security) puts it, "an attack on the user, not the system." It requires the user to visit the vulnerable site and participate in executing the code.
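
To make the mechanics concrete, here is a minimal sketch of a reflected XSS flaw of the kind described above, written with Python's standard library. The search page, the port and the "q" parameter are hypothetical illustrations, not drawn from any site named in this article.

```python
# A minimal sketch of a reflected XSS flaw, using only Python's standard
# library. The page, port and "q" parameter are hypothetical, for
# illustration only.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse


class SearchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        term = params.get("q", [""])[0]
        # Vulnerable line: the user-supplied search term is echoed straight
        # into the HTML, so markup or a <script> tag smuggled into the
        # link's "q" parameter becomes part of the page the victim's
        # browser renders.
        body = f"<html><body><p>Results for: {term}</p></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(body.encode("utf-8"))


if __name__ == "__main__":
    HTTPServer(("localhost", 8000), SearchHandler).serve_forever()
```

A crafted link to such a page, with a URL-encoded script tucked into the "q" parameter, is reflected back verbatim, so the victim's own browser runs the attacker's script with the trusted site's cookies and session in scope. That is what makes XSS an attack on the user rather than on the server.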

This is problem number one with disclosing Web vulnerabilities: what exactly is the vulnerability in this XSS scenario? Is it the design of the page? Yes, in part. But often, it's also the social engineering performed on the user and his browser. A hacker who calls himself RSnake, and who's regarded in the research community as an expert on XSS, goes even further, saying, "[XSS is] a gateway. All it means is I can pull some code in from somewhere." In some sense it is like the door to a house. Is a door a vulnerability? Or does it become one only when it's left unlocked? When do you report a door as a weakness -- when it's just there, when it's left unlocked, or when someone illegally or unwittingly walks through it? In the same way, it's possible to argue that careless users are as much to blame for XSS as software flaws. For the moment, let's treat XSS, the ability to inject code, as a technical vulnerability.

Problem number two with disclosure of XSS is its prevalence. Grossman claims XSS vulnerabilities can be found in 70 percent of websites. RSnake goes further. "I know Jeremiah says seven of 10. I'd say there's only one in 30 I come across where the XSS isn't totally obvious. I don't know of a company I couldn't break into [using XSS]."

If you apply Grossman's number to a recent Netcraft survey, which estimated that there are close to 100 million websites, you've got 70 million sites with XSS vulnerabilities. Repairing them one-off, two-off, 200,000-off is spitting in the proverbial ocean. Even if you've disclosed, you've done very little to reduce the overall risk of exploit. "Logistically, there's no way to disclose this stuff to all the interested parties," Grossman says. "I used to think it was my moral professional duty to report every vulnerability, but it would take up my whole day."

What's more, new XSS vulnerabilities are created all the time: first, because many programming languages have been made so easy to use that amateurs can rapidly build highly insecure web pages; and second, because in the slick, dynamic pages commonly marketed as "Web 2.0," code is both highly customized and constantly changing, says Wysopal, who is now CTO of Veracode. "For example, look at IIS [Microsoft's shrink-wrapped Web server software]," he says. "For about two years people were hammering on that and disclosing all kinds of flaws. But in the last couple of years, there have been almost no new vulnerabilities with IIS. It went from being a dog to one of the highest-security products out there. But it was one code base and lots of give-and-take between researchers and the vendor, over and over.

"On the Web, you don't have that give and take," he says. You can't continually improve a webpage's code because "Web code is highly customized. You won't see the same code on two different banking sites, and the code changes all the time."

That means, in the case of Web vulnerabilities, says Christey, "every input and every button you can press is a potential place to attack. And because so much data is moving, you can lose complete control. Many of these vulnerabilities work by mixing code with data. It creates flexibility but it also creates an opportunity for hacking."
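
Christey's point about mixing code and data also points at the standard, if partial, defense: keep user input as data by encoding it before it reaches the page. Here is a minimal sketch, reusing the hypothetical search page from the earlier example; it illustrates output escaping only, not a complete fix for every XSS variant.

```python
# A minimal sketch of the usual countermeasure: escape user input before
# mixing it into a page, so injected markup stays data. Continues the
# hypothetical search-page example above.
import html


def render_results(term: str) -> str:
    # html.escape turns <, >, & and quotes into HTML entities, so a
    # <script> tag in the search term is displayed as text, never executed.
    safe_term = html.escape(term, quote=True)
    return f"<html><body><p>Results for: {safe_term}</p></body></html>"


print(render_results("<script>alert(1)</script>"))
# Output contains: Results for: &lt;script&gt;alert(1)&lt;/script&gt;
```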

There are in fact so many variables in a Web session -- how the site is configured and updated, how the browser visiting the site is configured to interact with it -- that vulnerabilities to some extent become a function of complexity. They may affect only some subset of users -- people who use one browser over another, say. When it's difficult even to recreate the set of variables that constitute a vulnerability, it's hard to responsibly disclose that vulnerability.

"In some ways," RSnake says, "there is no hope. I'm not comfortable telling companies that I know how to protect them from this."
