Policy Preachers

Last September 11, the USA got religion when it came to information security — at least until the smoke cleared. Nevertheless, from their new pulpit in the White House, Richard Clarke and Howard Schmidt are still trying to sell vendors, executives, politicians and ordinary citizens on a vision of a more secure future. And converts don't come easily.

"About half of our job is marketing," admits Clarke, President Bush's cybersecurity adviser and chairman of the president's Critical Infrastructure Protection Board, created last October. Clarke, 51, made his name as President Clinton's counterterrorism adviser for most of the 1990s; vice chair Howard Schmidt, 52, is the former CSO of Microsoft. Together, the two men are information security's most prominent preachers.

These days, when they make newspaper headlines at all, it's for reporting doomsday scenarios about cyberattacks. At worst, their comments seem like needlessly alarmist attempts to get people to care about weaknesses in the nation's financial, telecommunications, transportation systems and other pieces of the critical infrastructure. At best, for CSOs, they're preaching to the choir.

In fact, in a lot of ways, the duo's challenges aren't so different from those of a CSO. Their roles are new, their power is limited, and their future is somewhat uncertain as Homeland Defense undergoes restructuring. But whereas CSOs influence policy, spending and awareness in an organisation or perhaps an industry, Clarke and Schmidt do so for the nation.

CSO went to their offices two blocks west of the White House not to hear their spiel about why the corporate world should care about critical infrastructure protection — you already know that. Instead, we drilled them about how they might use their power to influence everything from a controversial Freedom of Information Act (FOIA) exemption to vendor accountability to procurement by the US federal government. What they had to say may surprise you.

CSO: You've said that the FOIA exemption is the single most important policy change to improve information security. [Editor's note: This controversial exemption would ensure that information given to the federal government about computer attacks would not be made public.] Why is it so important?

Richard Clarke: If you look at the Nimda virus last fall — a major attack that caused billions of dollars' worth of losses to the private sector — not one company called us up to tell us it had been attacked, because they wanted to be able to keep it secret. They don't want their customers and their stockholders to lose confidence. We understand that. But the result is that we have an inadequate perception of what is going on in the American information infrastructure.

Senator Robert Bennett [R-Utah] probably puts it best. He says, Imagine you are a commander in charge of a battlefield, and you can only know about 15 per cent of what is going on in that battlefield. How could you defend yourself? Well, if you look at our critical infrastructure, about 85 per cent of it is in the private sector, and unless we can have some knowledge as to what's going on there — like attacks, viruses, worms, denial-of-service attacks — then we'll never be able to help defend it. Only by getting a FOIA exemption, narrowly written, will we ever be able to persuade companies that they can trust the government with information about vulnerabilities or hacks.

Is the exemption really necessary?

Clarke: Do you mean, are there already adequate provisions in the law that would exempt such information from a Freedom of Information Act request? Our lawyers say that the law as currently written would allow us to protect that information. But it doesn't matter what our lawyers say. Only by having corporate lawyers say it will companies be persuaded to give us that information. The companies' lawyers believe they need additional protection; therefore, we need to get additional protection.

If the law does pass, will an onslaught of people begin reporting information to you?

Howard Schmidt: It's hard to tell. We think in some cases we'll have companies come forth right away. In other cases there may be some hesitation; the general counsels of the various companies will have to look even deeper to find reasons why they may not be able to share information. There's still the perception that a company's ability to secure itself is a reputational issue, and that's justifiable. I'm sure there will be a little bit of giving of information, seeing how that plays out. I don't think it's suddenly going to open the floodgates.

Are you advocating any kind of tax benefits for spending on security?

Schmidt: Not at all. The cost to recover from a virus attack, a denial-of-service attack or an intrusion escalates considerably [from that of preventive measures]. When the Melissa virus hit at a company that I had some insight into, it took about $US14 million worth of labour effort, reconstitution, to bring that whole system up online after 10 days. [Later, with better processes in place] when Anna Kournikova hit the same company, they were able to contain it within 30 minutes. That 30 minutes translated into about $US12,000 worth of effort — quite a difference from $US14 million. That's why the CFOs are saying, Hmm, it might cost me on the front end to do some risk management, but in the long term, I'm going to save money and reduce total cost of ownership.
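Schmidt's before-and-after figures amount to a simple cost-avoidance calculation. A minimal sketch in Python, using only the numbers quoted above (illustrative, not an official model):

```python
# Incident-response costs Schmidt quotes for the same company.
melissa_cost = 14_000_000    # ~$US14M, 10 days to restore, before process improvements
kournikova_cost = 12_000     # ~$US12,000, contained in 30 minutes, after improvements

# Cost avoided on the second incident thanks to the preventive investment.
avoided = melissa_cost - kournikova_cost

# Any preventive spend below this figure is a net saving
# on this one incident alone.
print(f"Cost avoided: ${avoided:,}")  # Cost avoided: $13,988,000
```

This is the CFO's arithmetic in miniature: the front-end risk-management spend pays for itself as soon as it prevents or contains a single large incident.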

As sad as it is to say, it seems like the viruses and worms have actually helped as far as demonstrating that ROI.

Clarke: I think there's a silver lining to some of them because you know when you get hit. Frequently, when people penetrate networks, we don't know it because they're successful at it. They don't leave traces. It's helpful when we have major viruses and worms and denial-of-service attacks because they're noisy and leave fingerprints, and we know it's out there. People are then motivated to fix it. But that's not the case when you have stealthy penetrations that leave back doors, Trojan horses, logic bombs.

What's the administration's position on holding vendors accountable for products that aren't secure? And liability for products that aren't secure?

Clarke: Those are two related but separate issues. One is holding vendors accountable, and one is doing it in court. We are very much in favour of holding vendors accountable. When a product fails, the vendor has a responsibility to quickly identify a way of fixing it and getting that patch out. And the patch not only should fix the problem, it shouldn't interact badly with other widely utilised applications. It does us no good to get a patch that solves the vulnerability but then makes it impossible to use applications from other companies.

It's not terribly valuable to litigate these problems. We'd like to find solutions that are quicker than long, multiyear litigation.

Schmidt: There are two other components. One of those is the market drivers that would induce people to be more careful and more responsive. People want to buy the things for which they have the best support. When you buy a car, if it doesn't work well, you're going to think twice before you buy from that maker the next time.

The second piece is identifying what might be wrong with something. After Nimda, an informal survey asked those affected, Why were you affected, when the patches had been out for so long? The number-one answer was that people didn't know they needed to have the patches installed, which goes back to the accountability of the vendors.

What else is involved with convincing the vendors to create more secure products?

Clarke: The vendors tell us, We could create more secure products, but no one wants them. Then we talk to the procurement people — those in banking, finance, energy, government — and ask, Do you want more secure products? And they say, Yes! but the vendors won't make them. That's the dialogue of the deaf that Howard and I try to bridge. We take the critical infrastructure procurement people and the vendors by the hand and say, Let's agree that we're going to have more secure products. There's actually a real role for us to bring people together to have dialogues that you would think would naturally occur. We also have a role that I call the honeybee role — we fly around flower to flower proliferating the message and sharing information, so that we're able to learn what products are out there. We don't recommend certain kinds of brands, but we do recommend certain kinds of services.

John Gilligan, the CIO of the Air Force, recently threatened to stop using Microsoft products until they became more secure. We've heard similar rumblings from others. How feasible is it to force government agencies to buy only certain products?

Clarke: The federal government tried 20 years ago to only procure IT products that were security-certified. It didn't work because very few of the products could get certified in a timely manner. Exceptions were granted because people could demonstrate that there was no product available. So it became something of a farce.

We're looking at whether we could do it in a smarter way. We don't want to jump headlong into a full-up system of only procuring things that meet certain standards, but we do think there's a role for smart procurement. We think that if there is a product that has been certified under the NIAP [National Information Assurance Partnership] program of the Commerce Department, it ought to be given an advantage.

Under the NIAP, you can bring your product, whether software or hardware, to a federally approved laboratory for testing, and if it passes, then it's NIAP-certified. It used to be that the federal government did the testing itself, but there were so few people in the federal government who could do it that it took a long time. So what we've done now is have the federal government certify private sector laboratories to do the testing, so there are many more places to go, and a number of products have been certified. You can find them on the NIAP webpage. [That program] is about 5 years old. We are looking at whether we can get more products certified, select some key products, and have the federal government procure only certified products in key areas.

Schmidt: We've seen the evolution of attacks against our IT systems. Each generation of products gets better and better at resisting those things, but it still takes time to get these things created, identified, coded, shipped and then out to the public. If we were to say, Turn off the spigot of technology coming into the government, we'd be shooting ourselves in the foot, because the next generation is going to be better than the one that we're currently running, and oftentimes you're running two generations behind to begin with. So we have to strike a balance: how do we do smart procurement while phasing in a higher standard, and make sure a product meets our needs today rather than sitting in a static mode for five years while we wait for the approval process, and for people to change their products to meet the threats of the day.

And then there's the old adage that you don't know what you don't know. Both of us get asked all the time, What do you see as the next generation of attacks? Well, you don't know what you don't know. It could be something we're not aware of that takes place down the road. And if that does occur, then all of a sudden those products that have been certified are no longer valid. So we have to balance all those things, and it goes back to that core thing I mentioned earlier: using the bright people from government, academia and industry all together to figure out how to make this work today as well as in the future.

If you look at the state of critical infrastructure on September 10 versus now, what have the concrete accomplishments been?

Clarke: I think we can point to measurable improvements with the federal government's security in its cyberspace networks. The budget the president sent to Congress in February asks for a 64 per cent increase in funding to defend federal departments and agencies. That brings spending on IT security to almost 6 per cent of the federal IT budget. We're trying to do two things with that. Obviously we're trying to fix very serious problems that the federal departments have. But we're also trying to set a model for the private sector, for members of corporate boards of directors, for CEOs. We want them to see that the federal government is spending 6 per cent of its IT budget on IT security and ask, What are we doing at our company? Unfortunately most companies are not going to be able to say that they're spending anywhere near 6 per cent on security.

You quote a report that most companies spend more on coffee than on security. Is 6 per cent a benchmark? A catch-up?

Clarke: It's catch-up for the federal government, and it won't be enough if we don't sustain it or perhaps even raise it over several years. There's no good figure that is appropriate for every company or every institution. That's why we're not saying 6 per cent is the target. We're saying that every CEO and every member of the board of directors should be asking the question, How much is enough for my company?

The federal government's security is sometimes questionable. How much should federal agencies be a role model?

Clarke: We'd like federal agencies to be a role model, and unfortunately with few exceptions they've been a model of how not to do it. That's why President Bush is so committed to fixing that problem. We have legal responsibilities to protect the information in federal departments. There's a lot of information about you and me in computers in federal departments — from our military records to our medical records — so we have an obligation to the American people to protect their information. We also have an obligation to put our money where our policy is. For the first time with President Bush's budget, we're doing that.

How do you measure improvement?

Clarke: There are probably guideposts along the way, but there aren't measures of effectiveness that are more than anecdotal. You can look at the number of computer incidents; you can look at the dollar value of damage done by those incidents. Unfortunately those numbers are skyrocketing. That doesn't mean that we're not making progress. If you take the traditional measures of effectiveness — how many incidents do you have and how bad are they — they would tell you we're getting worse. And we are in some respects getting worse. The number of people connected and the number of functions connected to the Internet are going up, as is the sophistication of the attack tools. At the same time, we're making progress: getting the message out, getting more CEOs to care, getting the hardware and software manufacturers to develop more secure systems.

Schmidt: If you have a metric in which you identify the number of viruses found when you scan systems, is a lower number good or is a higher number good? That's the challenge when you develop metrics like that. If you're not catching many viruses, does it mean they're not there or that they're not affecting you? If you're catching a whole bunch, does it mean you have a system that allows those things to proliferate?

The other challenge is quantifying a negative: How many burglaries have I prevented by having extra police cars on the street? If you don't get broken into, that's a good thing, but was it because you did the right thing, or because they were hitting somebody else at the same time? One of the things Dick and I look at collectively is whether there is indeed a metric we can use to identify when we're getting better, and if so, how we can proliferate it so that people have a better sense of good, bad or indifferent when it comes to security metrics.

Clarke: Then there's the unknown. Have our enemies penetrated our critical infrastructure successfully and we don't know it? If there's a big conflict between us and them, are they already in a position where they can disable our critical infrastructure? We don't know. I'd be surprised if somebody hadn't tried it.

Who are the enemies?

Clarke: We've stopped asking that question, and I think it's important to stop asking that question. Before September 11, people thought in terms of a threat paradigm: Who are the enemies, and when are they going to do it, and where, and what are they going to do? And they waited for that information before they acted. So, tell me the name of the terrorist group, tell me what airplane they're going to hijack, what city they're going to attack, when this is going to occur, and then I'll do something to prevent it. Well, as we learned on September 11, it's too late frequently. Or you never get the information at all, and the attack just occurs. We're therefore advocating rather than the traditional threat paradigm of who, what, when, where, a vulnerability paradigm that says, Don't worry about who's going to do it, because the person who's going to attack you may not even know it yet. Don't worry about when it's going to occur. Don't worry about where and what they're going to do. Ask yourself what your vulnerabilities are. And then find that intersection between the things that are the most vulnerable and the things that would be the most damaging. It's a shift from who, when and where, to where are my weaknesses, and what are the most important weaknesses that I have?

So it's really self-reflection as opposed to...?

Clarke: As opposed to intelligence collection about the enemy. Because, as Howard says, many of these things take years to fix, and people who are not now actively our enemy may be three or five years from now. If all we do is collect intelligence about people we think are our enemies, we may miss what we should be doing.

I notice the word cyberterrorism has not come up.

Clarke: I don't use it because it tends to cause people to think that the enemy is terrorists, and particularly terrorist groups that they identify and know about, like al-Qaida or Hamas. There's a whole spectrum of threats, from the joy rider on the Internet who does Web defacements, to the person engaged in extortion, theft, fraud, industrial espionage, national intelligence espionage, to information warfare. We have to worry about most of that spectrum, and most of the actors you find on it are not people from terrorist groups. The other thing is, you wind up unable to separate the noise — what is merely dramatic at this moment — from what is a prelude to something more dramatic in the future. That's one of the challenges we've always had in tracking these down: do you chase everything that happens, in the event that something will be more dramatic later on, or do you take the really dramatic-looking stuff now? The bottom line is you never know. The term that I jokingly use is, until you put the "habeas grabus" on somebody and find out their intent, they could just be another joy rider out there.

Stories by Sarah Scalet