White hats and black boxes
- 14 June, 2007 15:17
Jeremiah Grossman wants you to know that firewalls and SSL encryption won't prevent a hacker from breaking into your e-commerce website, compromising your customers' data and possibly stealing your money. That's because most website attacks these days exploit bugs in the Web application itself, rather than in the operating system on which the application is running.
Grossman is the founder and chief technology officer of WhiteHat Security, a Silicon Valley firm that offers an outsourced website vulnerability management service. Using a combination of proprietary scanning and so-called ethical hacking, WhiteHat assesses the security of its clients' websites, looking for exploitable vulnerabilities.
WhiteHat does its scanning without access to the client's source code and from outside the client's firewall using the standard HTTP Web protocol. This approach is sometimes called "black box testing" because the website's contents are opaque to the security assessors. The problem with black box testing, of course, is that it is sure to miss many vulnerabilities and back doors that are hidden in the source code--black box testing can only find vulnerabilities that are visible to someone using your website. But the advantage of this approach is that it precisely mimics how a hacker would most likely conduct his reconnaissance and break-in.
I met Grossman this past February at the RSA Data Security Conference in San Francisco and then had a follow-up meeting with him in early March. What he told me was not all that surprising, but it was tremendously disturbing nonetheless. According to Grossman:
- WhiteHat is able to find significant vulnerabilities in approximately 80 percent of the websites that it analyzes.
- The 20 percent that don't have vulnerabilities are usually just "brochure-ware"--just a website with no active e-commerce application.
- Most C-level executives think that firewalls protect websites against Web-application attacks. (They don't.)
Yahoo's systems were protected by firewalls and other kinds of network isolation approaches. But these technologies don't prevent most attacks aimed at Web applications. Firewalls and isolated networks prevent an attacker on the Internet from interacting with a service. But Web applications, by their very nature, need to be open to anyone on the Internet. If a merchant were to use its firewall to block access to its shopping cart system, then none of the website's users would be able to buy anything!
Built in, not bolted on
Because you can't protect Web applications with a firewall, the only way to protect them is by building the protection into the application itself. This is harder than it sounds, because every Web application has two parts: the part running on your servers and the part that's running inside your customer's Web browser. Adding protection to the Web application means that the developer needs to develop a program where one half doesn't trust the other half. This is a hard idea for most developers to get their heads around.
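A minimal sketch of what "one half doesn't trust the other half" means in practice: the server-side half of a checkout recomputes totals from its own catalog and ignores any prices the browser sends. The SKUs, field names and prices here are invented for illustration.

```python
# Server-side catalog is the only source of truth for prices (in cents).
CATALOG = {"sku-100": 2999, "sku-200": 499}

def checkout_total(submitted_items):
    """Compute an order total, ignoring any client-supplied prices."""
    total = 0
    for item in submitted_items:
        sku = item["sku"]
        if sku not in CATALOG:
            raise ValueError("unknown SKU: %r" % sku)
        # The item may carry a "price" field set by the browser -- never use it.
        total += CATALOG[sku] * int(item["qty"])
    return total

# A tampered request claiming a 1-cent price is simply ignored:
order = [{"sku": "sku-100", "qty": 2, "price": 1}]
print(checkout_total(order))  # 5998
```

The browser half can still display prices and running totals for convenience; the point is that nothing it computes is ever treated as authoritative.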
One common vulnerability on the Web today is something called the predictable identifier vulnerability. It typically arises when part of the website is set up to return a piece of information whenever a Web browser asks for it using a certain identifier. For example, one URL might display a check image when it is provided with an account number and a check number. The developer might think this presents no problem, because anybody who knows the account number and the check number should be entitled to see that check image. The problem is that check numbers are predictable--they are issued sequentially. So just by trying different check numbers, a person who received one check from a payee could view every other check that the payee had written.
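The underlying mistake, sketched below with invented account and check numbers: knowing an identifier is not the same as being authorized to use it. The fix is an explicit ownership check tied to the logged-in user, not to the URL parameters.

```python
# Hypothetical ownership table: (account, check number) -> owning user.
CHECK_OWNERS = {("acct-1", 1001): "alice",
                ("acct-1", 1002): "alice",
                ("acct-2", 2001): "bob"}

def get_check_image(session_user, account, check_number):
    """Serve a check image only to the user who owns it."""
    key = (account, check_number)
    if key not in CHECK_OWNERS:
        return None
    # The vulnerable version omitted this test and served any valid key,
    # so anyone could walk the sequential check numbers.
    if CHECK_OWNERS[key] != session_user:
        raise PermissionError("not your check")
    return "image-bytes-for-%s-%d" % (account, check_number)
```

With the check in place, an attacker who increments check numbers gets a permission error instead of someone else's check.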
The predictable identifier vulnerability has shown up in many ways over time. Years ago I remember one website that sent its customers a URL to view their receipts. The URL had a number in it; by incrementing the number, customers could see other customers' receipts. And just a few months ago, WhiteHat's security engineers discovered a website in which users of the site's free services could access services that were available only to users with paid accounts. Overall, says Grossman, one in four websites that WhiteHat has tested is susceptible to this vulnerability.
Another common vulnerability is the so-called SQL Injection Attack. These vulnerabilities arise when information provided by the website's user isn't properly validated before being used to create a query that's written in the Structured Query Language, the interface language used by most of today's database systems. SQL injection attacks can be devastating to both customer privacy and the integrity of financial information. WhiteHat has found that one in five websites is vulnerable to this kind of attack.
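The pattern is easy to demonstrate with Python's built-in sqlite3 module; the table, column names and secret value below are invented for illustration. The vulnerable function splices user input into the query text, so SQL syntax in the input becomes part of the query; the safe version passes the input as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: user input is spliced directly into the SQL string.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # Parameterized: the driver keeps data separate from SQL syntax.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # the injected OR clause dumps every row
print(lookup_safe(payload))    # treated as a literal name; matches nothing
```

The same distinction--string concatenation versus bound parameters--applies in every language and database driver, which is why parameterized queries are the standard defense.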
There are two ways to address systematic problems like predictable IDs, SQL injection and cross-site scripting. The first is developer training--teach developers to write bug-free code. Grossman believes this is a best practice, but cautions that it won't have a significant impact for several years. Large companies with hundreds of developers also have high turnover, so they will be playing catch-up for a while to come. And just one bug can render an entire website vulnerable.
The second security approach is to rewrite legacy Web applications with modern developer tools--tools that are less susceptible to these kinds of problems.
From his vantage point at WhiteHat, Grossman has seen several organizations migrate websites from Microsoft's original ASP to ASP.NET. "ASP classic, the first generation of ASP websites, are generally riddled with vulnerabilities," he says. But when these organizations rewrote their applications in ASP.NET, their security improved dramatically. "Same developers, two different frameworks. It wasn't an education problem, it was a technology problem."
The newer platforms are more secure than the old ones because the framework provides native secure libraries and APIs for account management, log-in/log-out, session handling, input validation and so on. It's also important for a company to standardize on a single application development system. That way the company can build up in-house expertise, rather than approaching each new project like a novice.
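One concrete example of what those framework libraries buy you: session tokens and resource identifiers are drawn from a cryptographic random source rather than a counter, so they can't be guessed by incrementing. A minimal sketch using Python's standard secrets module (the function name is invented):

```python
import secrets

def new_receipt_token():
    """Return an unguessable, URL-safe identifier for a receipt page.

    24 random bytes (~192 bits) make enumeration infeasible, unlike the
    sequential receipt numbers described above.
    """
    return secrets.token_urlsafe(24)

a, b = new_receipt_token(), new_receipt_token()
print(a, b, a != b)
```

A framework that issues identifiers this way closes off the predictable identifier attack by construction, without each developer having to remember the lesson.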
Other companies have significant problems with process. For example, WhiteHat's scanner will sometimes find a vulnerability the first time the site is scanned but not find it the second time. "Our systems figured they fixed it and closed the ticket." But on a third scan the vulnerability sometimes comes back.
In a case like this, WhiteHat will call the customer. The developers look at their Web servers and say the vulnerability doesn't exist. And indeed, on some scans the vulnerability is there, and on some it isn't!
"We call it vulnerability clapping," explains Grossman. "Many websites have load-balanced systems" behind a single URL. Each of these systems is supposed to be running exactly the same code, but sometimes they aren't. "Some systems will be hot-fixed, and some won't," he says. These bugs are very hard to find because they require customers to examine each of their supposedly "identical" Web servers for differences.
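One hedged sketch of how an operations team might hunt for this kind of drift: fingerprint the deployed code on every supposedly identical server and flag any machine whose digest differs from the majority. Here each server's file tree is stood in for by a dict of path to contents; in practice you would hash the real files on each host.

```python
import hashlib

def fingerprint(tree):
    """Stable SHA-256 digest over a server's deployed files."""
    h = hashlib.sha256()
    for path in sorted(tree):
        h.update(path.encode())
        h.update(tree[path].encode())
    return h.hexdigest()

def find_drift(servers):
    """Return names of servers whose digest differs from the majority."""
    digests = {name: fingerprint(tree) for name, tree in servers.items()}
    counts = {}
    for d in digests.values():
        counts[d] = counts.get(d, 0) + 1
    majority = max(counts, key=counts.get)
    return sorted(n for n, d in digests.items() if d != majority)

servers = {
    "web1": {"app.py": "patched"},
    "web2": {"app.py": "patched"},
    "web3": {"app.py": "unpatched"},  # missed the hot-fix
}
print(find_drift(servers))  # ['web3']
```

Run after every deploy or hot-fix, a check like this turns "examine each supposedly identical server for differences" from a painful manual hunt into a one-line report.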
At another company--a financial institution--WhiteHat discovered an easily exploited vulnerability that would have let customers steal money. WhiteHat called up the company and the problem was hot-fixed within 24 hours. But a few months later, the vulnerability came back.
"The developers were working on the next release, set to come out in two to three months. Some developer did not back-port the hot-fix from the production server to the development server. So when the push occurred three months later, they pushed the vulnerability again." Ugh!
I've never been a big fan of penetration testing, but the two hours I spent talking with Grossman convinced me that it's a necessary part of operating today's e-commerce websites. Yes, it would be nice to eliminate these well-known bugs with better coding practices. But we live in the real world. It's better to look for the bugs and fix them than to simply cross your fingers and hope that they aren't there.
Simson Garfinkel, CISSP, is researching computer forensics and human cognition at Harvard University.