Let’s start where conversations about software usually end: basically, software sucks. If software were an office building, it would be built by a thousand carpenters, electricians and plumbers. Without architects. Or blueprints. It would look spectacular, but inside, the lifts would fail regularly. Thieves would have unfettered access through open vents at street level. Tenants would need consultants to move in. They would discover that the doors unlock whenever someone brews a pot of coffee. The builders would provide a repair kit and promise that such idiosyncrasies would not exist in the next skyscraper they build (which, by the way, tenants will be forced to move into).
Strangely, the tenants would be OK with all this. They’d tolerate the costs and the oddly comforting rhythm of failure and repair that came to dominate their lives. If someone asked: “Why do we put up with this building?” shoulders would be shrugged, hands tossed and sighs heaved. “That’s just how it is. Basically, buildings suck.”
The absurdity of this is the point, and it’s universal, because the software industry is strangely irrational and antithetical to common sense. It is perhaps the first industry ever in which shoddiness is not anathema — it’s simply expected. In many ways, shoddiness is the goal. “Don’t worry, be crappy,” Guy Kawasaki wrote in 2000 in his book, Rules for Revolutionaries: The Capitalist Manifesto for Creating and Marketing New Products and Services. “Revolutionary means you ship and then test,” he writes. “Lots of things made the first Mac in 1984 a piece of crap — but it was a revolutionary piece of crap.”
The only thing more shocking than the fact that Kawasaki’s iconoclasm passes as wisdom is that executives have spent billions of dollars endorsing it. They’ve invested — and reinvested — in software built to be revolutionary and not necessarily good. And when those products fail, or break, or allow bad guys in, the blame finds its way everywhere except to where it should go: on flawed products and the vendors that create them.
“We’ve developed a culture in which we don’t expect software to work well, where it’s OK for the marketplace to pay to serve as beta testers for software,” says Steve Cross, director and CEO of the Software Engineering Institute (SEI) at Carnegie Mellon University. “We just don’t apply the same demands that we do to other engineered artefacts. We pay for Windows the same as we would a toaster, and we expect the toaster to work every time. But if Windows crashes, well, that’s just how it is.”
Application security — until now an oxymoron of the highest order, like the US appellation “jumbo shrimp” — is why we’re starting here, where we usually end. Because it’s finally changing.
A complex set of factors is conspiring to create a cultural shift away from the defeatist tolerance of “that’s just how it is” toward a new era of empowerment. Not only can software get better, it must get better, say executives. They wonder, “Why is software so insecure?” and then, “What are we doing about it?”
In fact, there’s good news when it comes to application security — but it’s not the good news you might expect. Application security is changing for the better in a far more fundamental and profound way. Observers invoke the automotive industry’s quality wake-up call in the 70s. One security expert summed up the quiet revolution with a giddy, “It’s happening. It’s finally happening.”
Even Kawasaki seems to be changing his rules. He says security is a migraine headache that has to be solved. “Don’t tell me how to make my Web site cooler,” he says. “Tell me how I can make it secure.”
“Don’t worry, be crappy” has evolved into “Don’t be crappy.” Software that doesn’t suck. What a revolutionary concept.
Why Is Software So Insecure?
Software applications lack viable security because, at first, they didn’t need it. “I graduated in computer science and learned nothing about security,” says Chris Wysopal, technical director at security consultancy @Stake. “Program isolation was your security.”
The code-writing trade grew up during an era when only two things mattered: features and deadlines. Get the software to do something, and do it as fast as possible. Cyra Richardson, a developer at Microsoft for 12 years, has written code for most of the company’s major pieces of software, including Windows 3.1. “The measure of a great app then was that you did the most with the fewest resources” — memory, lines of code, development hours, she says. So no one built secure applications, but no one asked for them either. Windows 3.1 was “a program made up almost entirely of customers’ grassroots demands for features to be delivered as soon as possible”, Richardson recalls.
Networking changed all that. It allowed someone to hack away at your software from somewhere else, mostly undetected. But it also meant that more people were using computers, so there was more demand for software. That led to more competition. Software vendors coded frantically — still following the same insecure practices — to outwit competitors with more features sooner. That led to what one software developer called “featureitis”: an inflammation of features.
Now, features make software do something, but they don’t stop it from unwittingly doing something else at the same time. E-mail attachments, for example, are a feature. But e-mail attachments help spread viruses. That is an unintended consequence — and the more features, the more unintended consequences.
As networking spread and featureitis took hold, some systems were compromised. The worst case was in 1988 when a graduate student at Cornell University set off a worm on the ARPAnet that replicated itself to 6000 hosts and brought down the network. At the time, events like that were the exception.
By 1996, the Internet supported 16 million hosts. Application security — or, more specifically, the lack of it — turned exponentially worse. The Internet was a joke in terms of security, easily compromised by dedicated attackers. Teenagers were cracking anything they wanted to: NASA, the Pentagon, the Mexican finance ministry. The odd part is, while the world changed, software development did not. It stuck to its features-deadlines culture despite the security problem.
Even today, the software development methodologies most commonly used still cater to deadlines and features, and not security. “We have a really smart senior business manager here who controls a large chunk of this corporation but hasn’t a clue what’s necessary for security,” says an information security officer at one of the largest financial institutions in the world. “She looks at security as: ‘Will it cost me customers if I do it?’ She concludes that requiring complicated, alphanumeric passwords means losing 12 per cent of our customers. So she says no way.”
Software development has been able to maintain its old-school, insecure approach because the technology industry adopted a less-than-ideal fix for the problem: security applications, a multibillion-dollar industry’s worth of new code to layer on top of programs that remain foundationally insecure. But there’s an important subtlety. Security features don’t improve application security. They simply guard insecure code and, once bypassed, can allow access to the entire enterprise.
That’s triage, not surgery. In other words, the industry has put locks on the doors but not on the loading dock out back. Instead of securing networking protocols, firewalls are thrown up. Instead of building e-mail programs that defeat viruses, antivirus software is slapped on.
When the first major wave of Internet attacks hit in early 2000, security software was the saviour, brought in at any expense to mitigate the problem. But attacks kept coming, and more recently, security software has lost much of its original appeal. That — combined with a bad US economy, a new focus on security, pending regulation that focuses on securing information and sheer fatigue from the constant barrage of attacks — spurred CIOs and CSOs to think differently about how to fix the security problem.
In addition, a bevy of new research was published that proves there is an ROI for vendors and users in building more secure code. Plus, a new class of software tools was developed to automatically ferret out the most gratuitous software flaws.
Put it all together, and you get — ta da! — change. And not just change, but profound change. In technology, change usually means more features, more innovation, more services and more enhancements. In any event, it’s the vendor defining the change. This time, the buyers are foisting on vendors a better kind of change. They’re forcing vendors to go back and fix the software that was built poorly in the first place. The suddenly efficacious corporate software consumer is holding vendors accountable. He is creating contractual liability and pushing legislation. He is threatening to take his budget elsewhere if the code doesn’t tighten up. And it’s not just empty rhetoric.
Mary Ann Davidson, CSO at Oracle, claims that now “no one is asking for features; they want information assurance. They’re asking us how we secure our code.” Adds Scott Charney, chief security strategist at Microsoft, “Suddenly, executives are saying: We’re no longer just generically concerned about security.”
So What Are We Doing About It?
Specifically, all this concern has led to the empowerment of everyone who uses software, and now they’re pushing for some real application security. Here are the reasons why.
Vendors have no excuse for not fixing their software because it’s not technically difficult to do. For anyone who bothers to look, the numbers are overwhelming: 90 per cent of hackers tend to target known flaws in software. And 95 per cent of those attacks, according to SEI’s Cross, among other experts, exploit one of only seven types of flaws (see “Common Vulnerabilities”, right). So if you can take care of the most common types of flaws in a piece of software, you can stop the lion’s share of those attacks. In fact, if you eliminate the most common security hole of all — the dreaded buffer overflow — Cross says you’ll scotch nearly 60 per cent of the problem right there.
“It frustrates me,” says Cross. “It was kind of chilling when we realised half-a-dozen vulnerabilities were causing most of the problems. And it’s not complex stuff either. You can teach any freshman compsci student to do it. If the public understood that, there would be an outcry.”
SEI and others such as @Stake are shining a light on these startling facts (and making money in doing so). It has started to have an effect. Wysopal at @Stake says he’s seeing more empowered and proactive customers, and in turn, vendors are desperately seeking ways to keep those empowered customers.
“It’s been a big change,” he says. “We still get a lot of [customers saying]: ‘We’re shipping in a week. Could you look at the app and make sure it’s secure?’ But we’re seeing more clients sooner in the development process. Security always was the thing that delayed shipment, but they’ve started to see the benefits — better communication between developers, creating more robust applications that have fewer failures. The truth is, it doesn’t take that much longer to write a line of code that doesn’t have a buffer overflow than one that does. It’s just building awareness into the process so that, eventually, your developers simply don’t write buffers with unbounded strings.”
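Wysopal’s point can be shown in a few lines of C. This is a minimal, hypothetical sketch — the function names are invented for illustration — contrasting an unbounded string copy with a bounded one:

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy copies until it finds a NUL terminator, writing
 * however many bytes the attacker supplies. Input longer than dst
 * overruns the buffer and tramples adjacent memory. */
void copy_unbounded(char *dst, const char *src) {
    strcpy(dst, src);              /* no length check: classic overflow */
}

/* Safe: snprintf writes at most dst_size bytes (including the
 * terminating NUL), so oversized input is truncated, not overflowed. */
void copy_bounded(char *dst, size_t dst_size, const char *src) {
    snprintf(dst, dst_size, "%s", src);
}
```

The bounded version costs one extra argument — exactly Wysopal’s claim that the secure line of code doesn’t take much longer to write than the insecure one.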
In fact, it’s a little more complicated than that. Even if, starting tomorrow, no new programs contained buffer overflows (and, of course, it will take years of training and development to minimise buffer overflows), there are billions of lines of legacy code out there containing 300 variations on the buffer-overflow theme. What’s more, in a program with millions of lines of code, there are thousands of instances of buffer overflows. They are needles in a binary haystack.
Fortunately, some enterprising companies have built tools that automate the process of finding the buffers and fixing the software. The class of tool is called secure scanning or application scanning, and the effect of such tools could be profound. They will allow CIOs and CSOs to, basically, audit software. They’ve already become part of the security auditing process, and there’s nothing to stop them from becoming part of the application sales process too. Wysopal tells the story of a CSO who brought him a firewall for vulnerability testing and scanning. When a host of serious flaws were found, the customer literally sent the product back to the vendor and, in so many words, said, If you want us to buy this, fix these vulnerabilities. To preserve the sale, the vendor fixed the firewall.
Strong contracts are making software better for everyone. According to @Stake research, vendors should realise that there’s an ROI in designing security into software earlier rather than later. But Wysopal believes that’s not necessarily the only motivation for companies to improve their code’s safety. “I think they also see the liability coming,” he says. “I think they see the big companies building it into contracts.”
A contract GE signed with software vendor General Magic Incorporated earlier this year has security officers and experts giddy and encouraged by its language (see “Put It in Writing”, page 92). In essence it holds General Magic fully accountable for security flaws and dictates that the vendor pay for fixing the flaws.
General Magic officials say they weren’t surprised by the language in the contract, but many experts say the company has to be pretty confident in its products to sign off. The effect of the contract, though, is to improve software in general. The vendor must make secure applications — or fix them so they’re secure — to conform to its contract with a customer, but that makes the software better for everyone.
Clout is not limited to the Fortune 500. Sure, it’s easy for GE to write such a contract, given that GE is part of the Fortune 2. And there’s nothing wrong with CIOs benefiting from GE’s clout — the corporate equivalent of drafting in auto racing.
But for CIOs at companies smaller than GE (which is everyone but Wal-Mart), there are other ways to force the issue with vendors. One can join the Sustainable Computing Consortium at Carnegie Mellon University, or the Internet Security Alliance, formed under the Electronic Industries Alliance. These interest groups help companies of all sizes band together on standardising contract language and best practices for software development.
Some are taking satisfaction in a good old-fashioned boycott, even if they are so small as to escape the vendor’s notice. Newnham College at the University of Cambridge in England, with 700 users, recently banned Microsoft’s Outlook from use on campus because of the virus problem.
Much of the clout CIOs gain will come from the market evolving. In a sense, the software makers create clout for CIOs by asking them to deploy the product for ever more critical business tasks. At some point, the potential damage an insecure product could inflict will dictate whether it will be purchased.
“Two years ago, the marketing strategy was to just get it out there. And some of the stuff that went out was really insecure,” says the anonymous information security officer at a large financial institution. “But now, we just say, applications don’t go live without security. It’s a sledgehammer.”
And it’s not a randomly wielded one either. His company has created a formal process to assess vendors’ applications and his own company’s software development as well. It includes auditing and penetration testing, and the vendors’ conforming to overarching security criteria, such as eliminating buffer overflows and so forth. It’s not unusual, the security officer says, for his group to spend $US40,000 per quarter testing and breaking a single application.
“Customers are vetting us,” says Oracle’s Davidson. “Not just kicking the tyres, but they’re asking how we handle vulnerabilities. Where is our code stored? Do we do regression testing? What are our secure coding standards? It’s impressive, but it’s also just plain necessary.
“They have to be demanding. If customers don’t make security a basic criteria, they lose their right to complain in a lot of ways when things go bad,” she says.
At the bank, the security officer says, is a running list of vendors that are “certified” — that is, they’ve successfully met the application security criteria by going through the formal process. The list is an incentive for vendors to clean up their code, because if they’re certified, they have an advantage over those that aren’t the next time they want to sell software. Vendors, he says, “have either gone broke trying to satisfy our criteria, or they run through the operation pretty well. A few see what we demand and just run away. But there doesn’t seem to be any middle ground.”
In the US, the government is taking an active role. The image of the government in security is that of a clumsy organisation tripping over its own red tape. But right now, at least in terms of application security, the US government is a driving force, and its efforts to improve software are putting the private sector to shame.
In fact, no sector has been more effective in the past year at pushing vendors towards security or at using its clout (often in the form of regulation) to effect change.
At the state level, legislatures have collectively ignored the Uniform Computer Information Transactions Act (UCITA), a complex law that would in part reduce liability for software vendors (most major vendors have backed UCITA).
Federally, money has poured into the complex skein of agencies dealing with critical infrastructure protection, which has taken on a life of its own since September 11. Equally important but not as well publicised, the feds fully implemented in July the National Security Telecommunications Information Systems Security Policy No. 11, called NSTISSP (pronounced nissTISSip), after a two-year phase-in. The policy dictates that all software that’s in some way used in a national security setting must pass independent security audits before the government will purchase it.
The US government has for more than a decade tried to implement such a policy, but it has been put off. Vendors have routinely been able to receive waivers through loopholes in order to avoid the process. The July move is considered a line in the sand. With national security on everyone’s mind, experts believe waivers will be harder to come by. The US Navy is telling kvetching vendors to use NSTISSP No. 11 as a way to gain a competitive advantage. At any rate, products will have to be secured, or the government won’t buy them. Like GE’s contract, this makes software better for everyone.
The ability of the public sector to whip vendors into shape on application security is best represented, though, by John Gilligan, CIO of the US Air Force, who in March told Microsoft to make better products or he’ll take his $US6 billion budget elsewhere. It was a challenge by proxy to all software vendors. At the time, Gilligan said he was “approaching the point where we’re spending more money to find patches and fix vulnerabilities than we paid for the software”. And he wasn’t shy about labelling software security a “national security issue”.
Microsoft chief security strategist Charney called himself a “nudge and a pest by nature”, and he may have found his counterpart in Gilligan, who in addition to mobilising the Air Force is encouraging other federal agencies to use similar tactics. Gilligan says he was encouraged by Bill Gates’ notorious “Trustworthy Computing” memo — his mea culpa proclamation in January that Microsoft software must get more secure — but that “the key will be, what’s the follow-through?”
Gilligan is right, and clever, to invoke patches as a major part of his problem. If proof of an ROI from securing applications early doesn’t convince a vendor, and neither does winning the favour of large customers by submitting to a certification process or a strongly worded contract, then patches might do the trick.
Patches are like ridiculously complex tourniquets. They are the terrible price everyone — vendors and CIOs alike — pays for 30 years of insecure application development. And they are expensive. Davidson at Oracle estimates that one patch the company released cost Oracle $US1 million. Charney won’t estimate. But what’s clear is that the economics of patching is quickly getting out of hand, and the vendors appear to be motivated to ameliorate the problem.
At Microsoft, it starts with security training, required for all Microsoft programmers as a result of Gates’ memo. Michael Howard, coauthor of Writing Secure Code, and Steve Lipner, manager of Microsoft’s security centre (Patch Central), are running the effort to make Microsoft software more secure.
The training establishes new processes (coding through defence in depth, that is, writing your piece of code as if everything around your code will fail). It sets new rules (security goals now go in requirements documents at Microsoft; insecure drivers are summarily removed from programs, a practice that Richardson says would have been heresy not long ago). And it creates a framework for introducing Microsoft teams to the concept of managed code (essentially, reusable code that comes with guarantees about its integrity).
A year and several hundred million dollars later, it’s still not clear if the two-day security training for Microsoft’s developers is giving them a fish, or teaching them to fish. Richardson seems to believe the latter. She says the training starts with “religion, apple pie and how-we-have-to-save-America speeches”. And, she says, it includes at least one tough lesson: “You can’t design secure code by accident. You can’t just start designing and think: ‘Oh, I’ll make this secure now.’ You have to change the ethos of your design and development process. To me, the change has been dramatic and instant.”
Among Microsoft customers, the reaction is more muted. Since Gates’ proclamation, gaping security holes have been found in Internet Information Server 5.0, reminding the world that legacy code will live on. Even the company’s gaming console, Xbox, was cracked — indicating the pervasiveness of the insecure development ethos and how hard it will be to change.
Microsoft also faces an extremely sceptical community of CIOs and security watchdogs. Don O’Neill, executive vice president for the Centre for National Software Studies, says: “When it comes to trustworthy software products, Microsoft has forfeited the right to look us in the face.”
So let’s end where conversations about application security usually begin: Microsoft.
Richardson’s reaction to Gates’ memo was not much different than anyone else’s. “I wondered how much of this was a marketing issue compared with a real consumer issue,” she says.
The memo has become a reference point in the evolution of application security — the event cited as the start of the current sea change. In truth, the tides were turning for a year or more, and if a date must be given, it would be September 18, 2001, one week after 9-11 and the day that the Nimda virus hit. Microsoft’s entering the fray — as it did with the Internet in 1995, also via a memo — is more an indication that the latecomers have arrived, a sort of cultural quorum call.
It was: “We’re all here so let’s get started”, the beginning of the era of application security as a real discipline, and not an oxymoron.
SIDEBAR: Common Vulnerabilities
Experts say the following common problems in software code, which programmers haven’t bothered to mitigate, account for the vast majority of vulnerabilities. The good news: most of these are easily fixed if they’re found.
Buffer overflows. If a programmer doesn’t tell a program to limit the amount of data that can go into an input field, a malfeasant can stuff that field with tons of data, flooding other parts of memory and letting the bad guy take control of the system.
Format string vulnerabilities. Format strings are what tell, say, a printer how to present letters and numbers on a page. If a program lets user input serve as the format string itself, an attacker can use rogue conversion specifiers to read or write memory and take control of the computer, in a similar way to buffer overflows.
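A hypothetical C sketch of the pattern (the function names are invented): the danger is letting user input serve as the format string rather than as data.

```c
#include <stdio.h>

/* Vulnerable pattern: user input IS the format string, so conversion
 * specifiers in the input (%x to read the stack, %n to write memory)
 * are interpreted by the formatting engine. */
int log_unsafe(char *out, size_t n, const char *user_input) {
    return snprintf(out, n, user_input);   /* attacker controls format */
}

/* Safe pattern: the format is a constant and the input is only data,
 * so a "%x" in the input is printed literally, never interpreted. */
int log_safe(char *out, size_t n, const char *user_input) {
    return snprintf(out, n, "%s", user_input);
}
```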
Canonicalisation issues. The same resource can often be named in more than one way — an encoded URL, say, or a relative file path. An attacker can bypass security checks by supplying a form of the name that one program checks but another, handling the same data, does not.
Inadequate privilege checking. Someone can slip in unchecked if a program doesn’t ask for authentication at every doorway to features.
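A minimal sketch, with invented names, of the fix: re-check privilege at every sensitive entry point rather than trusting that an earlier doorway already did.

```c
#include <stddef.h>

/* Hypothetical session carrying the caller's identity and privilege. */
typedef struct {
    const char *user;
    int         is_admin;
} session_t;

/* Every privileged operation performs its own check at the door,
 * instead of assuming a check happened somewhere upstream. */
int delete_account(const session_t *s, const char *victim) {
    if (s == NULL || !s->is_admin)
        return -1;                 /* refused: inadequate privilege */
    (void)victim;                  /* ... deletion would happen here ... */
    return 0;
}
```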
Script injection. If a program fails to strip out or escape executable script in user input, attackers can enter their own code and the system will run it. For example, attackers could embed commands in a SQL database query that are then executed on the system.
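The SQL example can be made concrete with a hypothetical C sketch (names invented). Splicing raw input into the query text lets an attacker rewrite the query; real code should use the database driver’s parameterised queries, which keep data out of the SQL text entirely.

```c
#include <stdio.h>
#include <string.h>

/* Vulnerable pattern: raw user input is spliced into the SQL text.
 * A "name" of  x' OR '1'='1  turns the WHERE clause into a tautology
 * that matches every row in the table. */
void build_query_unsafe(char *out, size_t n, const char *name) {
    snprintf(out, n, "SELECT * FROM users WHERE name = '%s'", name);
}

/* Crude mitigation sketch: refuse input containing SQL metacharacters.
 * (Parameterised queries are the proper fix; this only illustrates
 * that untrusted input must never be treated as code.) */
int name_is_clean(const char *name) {
    return strpbrk(name, "'\";") == NULL;
}
```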
Information leakage. Because of poor design, some programs expose their own playbooks — directory structures, configuration information, IP addresses, passwords — to attackers who know where to look for such information.
Error handling. A subset of information leakage: sometimes the way a program handles an error exposes information an attacker can use. For example, when an e-mail bounces, the error message might contain IP addresses, server names or even the type of server, letting the attacker know how and where to hack. SOURCE: @STAKE, CSO
SIDEBAR: Put It in Writing
THIS IS FROM A CONTRACT between GE and software vendor General Magic Incorporated (GMI), from earlier this year, which, experts say, represents some of the strongest language to date that software users have crafted to hold software vendors accountable for the quality of their code. It also creates clout-by-proxy: if General Magic has to make sure the code conforms for GE, it will conform for all users of the product.
7.3 Code Integrity Warranty. GMI warrants and represents that the GMI software, other than the key software, does not and will not contain any program routine, device, code or instructions (including any code or instructions provided by third parties) or other undisclosed feature, including, without limitation, a time bomb, virus, software lock, drop-dead device, malicious logic, worm, Trojan horse, bug, error, defect or trap door (including year 2000), that is capable of accessing, modifying, deleting, damaging, disabling, deactivating, interfering with or otherwise harming the GMI software, any computers, networks, data or other electronically stored information, or computer programs or systems (collectively, “disabling procedures”). Such representation and warranty applies regardless of whether such disabling procedures are authorised by GMI to be included in the GMI software. If GMI incorporates into the GMI software programs or routines supplied by other vendors, licensors or contractors (other than the key software), GMI shall obtain comparable warranties from such providers or GMI shall take appropriate action to ensure that such programs or routines are free of disabling procedures. Notwithstanding any other limitations in this agreement, GMI agrees to notify GE immediately upon discovery of any disabling procedures that are or may be included in the GMI software, and, if disabling procedures are discovered or reasonably suspected to be present in the GMI software, GMI, as its entire liability and GE’s sole and exclusive remedy for the breach of the warranty in this section 7.3, agrees to take action immediately, at its own expense, to identify and eradicate (or to equip GE to identify and eradicate) such disabling procedures and carry out any recovery necessary to remedy any impact of such disabling procedures. Source: FreeEdgar.com/SEC