Cisco's John Stewart on the latest security threats … and what enterprises can do to fight back
17 February, 2012 15:49
Freelance writer Susan Perschke recently sat down with Cisco Vice President and Chief Security Officer John N. Stewart for an in-depth discussion of the state of enterprise security.
Stewart leads Cisco's security operations, product security, and government security functions. He is also responsible for overseeing security for Cisco.com, the infrastructure supporting Cisco's $40-plus billion business, and for WebEx, the collaboration service providing 73 million online meetings per year, among other Cisco functions. Stewart holds a Master of Science degree in computer and information science, with honors, from Syracuse University in Syracuse, N.Y.
In this wide-ranging discussion, Stewart describes the most troubling threats facing enterprises today, and talks about how companies can protect themselves by deploying what he calls "composite security." He delivers specific advice and gets into areas like mobility, identity management, and the need for companies to begin planning for IPv6.
With network security threats continuing to evolve, what are Cisco customers reporting as their top security challenge?
To give you a little bit of history, I've been in the computer security industry for almost 25 years, and the responsibilities I hold are the traditional corporate information security functions. I also co-chair the products security board and am involved in a significant portion of the way we do government work around the globe, with implementation for military intelligence and public sector customers. So I end up having three views of the security challenges we face. My observation, and what we hear most from customers, at least circa 2012, is that their challenges break down essentially into a "triad of triads," with one aspect in particular most often causing them the most significant fits.
The first triad essentially dissects the attacking community, if you will, into three main sets of perpetrators. We have individuals who are working on their own behalf for any number of reasons, trying to get into corporations or businesses, or to affect online services. We have organized groups that, more often than not, are funded -- that could be a traditional crime-based group, or it could be a country; I'm trying to abstract those groups as ones that are organized, well funded and with purpose. Then there's a third group that emerged in 2011 that I don't think was easily predictable, and it's this thinking: "I'm not financially motivated, I'm not working as an individual, I'm just going to group together at a moment's notice and have a purpose and attack motive." So that completes the first triad, which means that we have quite a collision of accelerated threats with different purposes. One could be curiosity, one could be monetary or disruptive, and the third could be purely "I have a political purpose against you." All of our customers, for all intents and purposes, are very worried about each of them, and they're not precisely educated in equal measure as to which they should go after first, or protect against, or understand next. So that triad is the externalization of those threats and who is doing them.
The second triad -- and this is the one that's giving most of our customers fits -- is that we now have a fairly rapid pace in which three major technology platforms have shifted, essentially within the same two-year period. Those three shifts are collaboration, mobility and virtualization. Our customers share with us that this technology acceleration has now made a significant amount of what they've done very difficult to implement, and it morphs into a fear of collaboration. We hear, "I have collaboration information outside my company, it's being exchanged on a regular basis with other companies, people, customers, and so forth, and I'm worried about protecting it." On the mobility side, it's the "bring your own device" (BYOD) challenge. For virtualization, I have a broad definition that happens to include cloud, which to me is just a delivery form for virtual systems, and it often renders itself as, "What do I do about cloud providers?" But virtualization at large comes down to, "How do I protect against hypervisor attacks? How can I be confident that data center operations are sound?" and so forth. So that's the second triad -- this significant technology shift on three different axes.
The third triad -- our customers tell us this, and I feel it, too -- is that for any number of economic reasons, including the quality of service delivery to your customers, IT's criticality is high and to the right. It can be the differentiation of your business, it can be the intimacy with your customer, and it can be the public delivery of what you're doing ... you name it, it's now critical. You just can't live without your network, your systems, your data center, etc., because you're fully reliant upon them to run a business. It's not only a great time to be in IT, it's also a very scary time to be in IT. You find yourself facing an accelerated threat from three different actor types, three significant technology shifts happening concurrently, and systems that had better not go down because they are imperative to our lives. So that's what I've observed, and I can't singularly answer what our customers report as their No. 1 security challenge, except for managing that triad of triads.
Client-side and mobile device software security tends to get short shrift compared to OS and perimeter systems, even though unprotected client and mobile systems could potentially pose a greater threat to network security. Should network managers refocus their priorities to make sure both network and client-side security receive equal attention?
This is highly contextual to what the given operations are, so the best way I would respond to this is that the network itself is a lot like the power grid of yesterday -- if you had power it was great, but if it broke down, nobody was surprised. Over time, you expected power to be up and running, and you changed your perspective about it. Now, with the exception of rural areas, which are prepared to lose power during certain seasonal cycles, you know it's going to be there when you need it.
The networking community did the same thing. During the early days, if the network was up it was astonishing, and we didn't really mind if it went down. Then it became, "I'm largely expecting the network to be up, except at certain times of the day"; and now it's like, "What the heck do you mean, it's down?" So my observation is that you could work all you want on application server systems at the high-order bits, but make sure you're on solid, concrete ground with the network infrastructure. There's a tendency to lean on the concept that, "if it's not broke, don't fix it," when, in fact, nothing could be further from the truth: The network needs care.
I tend to believe that the multiplicity of devices means that we are now moving or shifting toward network-based vs. endpoint-based security solutions. This is true, if for no other reason than the exploding number and heterogeneity of vendors, both on what we use as consumers or employees, and the "personless" devices, such as the touch panels that I have in my house, which are just as capable of sending a DDoS (distributed denial-of-service attack) across my LAN as my computer.
It seems that the attack surfaces are expanding exponentially.
Yes, they are, especially when it comes to the number of vendors. If you're working on an endpoint-based security solution, it's getting more complicated because of its nature. For all intents and purposes, you're facing a situation where you can't manage every vendor because there are just too many of them. And then there's the combination of the vendor plus the operating system. If it was just Solaris, Linux and Microsoft Windows, I think we would have a more manageable scenario, but that's just not the case anymore.
So perhaps the network becomes more critical again because it has a greater force effect, protecting everything that touches it, and it has the highest capability of seeing everything that's on it.
What is the best defense strategy during the inevitable migration to IPv6? How great is the threat posed by older, non-IPv6-aware routers and switches?
The challenge with IPv4-only switch routing platforms is that they can become "blind." Juxtapose that with the point I made a moment ago -- that the network itself may be the most logical place to protect now that you can't rely on the endpoint. If a switch or a router is blind, then it really can't help you much. So IPv6 tunneling inside IPv4 essentially makes it just a hop, and that means it's going to pass along both good and bad, with no real ability to determine what to do. That's where the risk becomes evident to me. Any thoughts on that yourself?
What really struck home for me after I wrote my second article about IPv6 were the comments from people saying essentially, "Since no one's using it (IPv6), and I don't have any v6 equipment yet, and my upstream provider doesn't support v6, why should I be concerned?" But my answer to that is, "So what -- you need to be concerned, because you are still vulnerable to v6 attacks that can come through v4."
Absolutely. That's an interesting observation because the adoption of v6 is actually gently moving, if not significantly moving up, for a couple of reasons. If you think about the quad-A day we did back in June, it was one of the initial forays toward demonstrating that you can run v6 networks, advertise, etc., but most of the search providers have already enabled v6 in some capacity, especially on phones. And just like my Mac operating system is v6-enabled instantly, even if I'm not using v6 on the network -- default "on."
Exactly. Everybody has it even if they're not aware.
And I would imagine it gets to the point of the economics of it where IT goes to management and management wonders why they need to worry about IPv6 -- it's just one more layer of expense.
True, though to your point just a moment ago, just because you don't think you're using it doesn't mean that no one's using it. Plus, certainly in my mind there is an argument to be made that if you're going to look at the future and embrace it, you had better not ignore v6, because it's an inevitability that might take a few years to get ready for, and I'd rather not have to learn it the moment I need it. IPv6 is here, whether we want to acknowledge it or not, and the idea that all v4 thinking applies to v6 is a fallacy. It's probably a 90% Venn diagram, but that's about it, and then you have 10% that's going to be highly unique to IPv6.
There used to be a rather finite space that you could search to reverse-trace an attack, and now, with IPv6's billions upon billions of addresses, that goes away.
Right, and you just hit on a point. Some of the uniqueness of v6 is that you're not going to run sweeps -- you can't run a scan to find your vulnerabilities. So why not learn this now, before it becomes something you need to resolve but are not ready for? This is how we learned v4, too, and I find it sort of ironic that there's a sense in the community that, "No, we don't really need to learn it because we're not using it." We weren't using v4 either, but we then started using it intra-data center, then intra-LAN, then for remote access, and then ubiquitously. I'd rather learn in the beginning than learn in the middle.
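Some back-of-the-envelope arithmetic illustrates why IPv4-style address sweeps stop working under v6. This is an editorial sketch, not Cisco's analysis; the probe rate is an assumed figure:

```python
# Rough arithmetic on why brute-force address sweeps do not carry
# over from IPv4 to IPv6 (probe rate is an assumed, optimistic figure).

IPV4_ADDRESSES = 2 ** 32       # the entire IPv4 address space
IPV6_SUBNET = 2 ** 64          # hosts in a single standard /64 IPv6 subnet
PROBES_PER_SECOND = 1_000_000  # assumption: a fast scanner at 1M probes/s

ipv4_hours = IPV4_ADDRESSES / PROBES_PER_SECOND / 3600
ipv6_years = IPV6_SUBNET / PROBES_PER_SECOND / (3600 * 24 * 365)

print(f"Sweep all of IPv4 at 1M probes/s: ~{ipv4_hours:.1f} hours")
print(f"Sweep one /64 subnet at 1M probes/s: ~{ipv6_years:,.0f} years")
```

At that rate the whole IPv4 Internet falls in about an hour, while a single IPv6 /64 subnet takes on the order of half a million years -- which is why vulnerability discovery in v6 has to rely on something other than scanning.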
What is the practical advice for organizations? Can you start doing something at the perimeter at least to start capturing malware that might be packaged inside v4?
There are two practical things you can begin to do. The first is almost like using DLP (data loss prevention) for this purpose: you're essentially looking for encapsulated packets. That's not a precise science yet either, since you're dealing with packaging, encryption and other obfuscation techniques, so you may not see it. The second thing you can do intra-organizationally is just put up v6-enabled devices and see what "pings" them -- not a literal ping, but you know what I mean. You'll then find out if there's v6 traffic, even though you might not think there is. That's because you're going to see the v6 devices broadcast and look for neighbors, do router advertisements, make neighbor address requests, and multicast sweep data with v6 in it. It just gives you some awareness that, "Hey, I have to think about this," and I think that's very practical.
The last part I would offer that's important is to just play with IPv6, especially now when it's not really, really bad. It's a good time to learn.
With even the most secure defenses vulnerable (example: successful hacking of RSA digital keys), is there a new paradigm for network identity and access?
On the subject of identity access, irrespective of other vendors' woes in the world of OTP (one-time passwords), my belief is that identity is changing. But it's changing not so much on the OTP "good" or "bad" axis; it's changing on what we call "context." We've talked a little bit about this publicly as a company. It's because we have to wrap up the "who, what, where and how" in order to get what I would call projected identity that's usable, versus just the "who." So my belief is that when I can start combining elements that allow pattern recognition of behavior as a result of trying to identify you -- and then feel confident it really is you -- then the "who, what, where and how" is context-significant.
And that's how we've gone after it with the Cisco Identity Services Engine, and also providers like Ping, which has done federated identity in this respect for cloud services providers and the like. So it's about adding rich data to John Stewart -- "Well, it's John Stewart, and he's working apparently at 3 a.m., and the last we knew he was in the Pacific time zone, so what in the world is he doing that for?" It might still be me, I might be working late, but that context would still be used, versus "John Stewart gave me a fixed password, which I trust ubiquitously." So that's where I think we'll be getting to.
The simple observation I would make here is that usernames and one-time passwords, for the moment, procedurally and technologically are still a very strong way to handle identity. They're just not strong enough, because they don't have the context of the "who, what, where and how." So it's not as if we have to retool the entire world because one-time passwords have fallen. I think we can just build upon getting beyond the "who" to, "Where are you accessing it from, what are you trying to do, and how are you trying to transport traffic?" When you throw all that together, you can start making much better decisions as to whether or not to allow access, irrespective of whether the access requested is inside or outside the company.
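The "who, what, where and how" idea can be sketched as a simple policy function. Everything below -- the factor names, the scoring, the thresholds -- is invented for illustration and is not the Cisco ISE policy model; it only shows the shape of a context-aware decision:

```python
# Hypothetical sketch of context-aware access control: valid credentials
# (the "who") are combined with contextual signals before granting access.
# All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AccessRequest:
    password_ok: bool     # the traditional "who"
    known_device: bool    # "how" -- managed endpoint vs. unknown device
    usual_location: bool  # "where" -- matches the user's recent region
    usual_hours: bool     # "what/when" -- fits the typical working pattern

def decide(req: AccessRequest) -> str:
    if not req.password_ok:
        return "deny"
    # Count contextual signals that look normal for this user.
    context_score = sum([req.known_device, req.usual_location, req.usual_hours])
    if context_score == 3:
        return "allow"
    if context_score == 2:
        return "allow-with-mfa"  # e.g. the 3 a.m. sign-in Stewart describes
    return "deny"
```

Under this toy policy, a correct fixed password presented from an unknown device, in a new location, outside working hours is refused rather than trusted "ubiquitously," while the legitimate late-night session merely triggers an extra verification step.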
So, practically speaking, can you handle that at the switch level?
Yes. There's more than one company and product in this space, but this is where we've gone. We're already shipping products that do this: the Cisco Identity Services Engine (ISE), which we launched in May 2011; the Cisco Adaptive Security Appliance (ASA), essentially our firewall used in a remote-access solution, which is adding a lot of that data; and, last but not least, the Cisco AnyConnect VPN Client, the client supplicant that allows you to do remote access.
Is defense-in-depth still the best overall security strategy for data center managers?
I agree that defense-in-depth, or what I would call "composite security," is still a very sound strategy. However, there is a piece of this that we don't talk enough about, and that I think is elemental, which is situational understanding. What I'm trying to offer up here for debate is that if you have defense-in-depth but you don't understand your infrastructure, then you don't really have security.
Too often, just out of speed, time, efficiency, you name it, we end up building things and then we lose track of how they're working properly, so we don't really have that situational understanding. We have all these layers of defense-in-depth, but then the porosity of something we don't understand is where the hole is, and somebody else finds it, because they study us ...
And they actually may end up knowing more about your network than you do.
Yes, exactly. So only when composite security/defense-in-depth and situational understanding are combined, should you really feel confident, because one without the other, I don't believe, is a comprehensive enough strategy.
Another interesting aspect about situational understanding -- one in which we've focused our product and services suite -- is that it's really important to have somebody watching you from the outside. A classic example: I have defense-in-depth and situational understanding, so I know it's working and I know how it's working, but somebody outside my infrastructure watching me is actually noticing that I have all sorts of ports open, or a new system's online, or this thing is generating gigabytes of traffic, or it's become a spam host or a botnet-controlled device, or whatever -- that's equally valuable as part of that situational understanding. Now you know that whatever you've done, it wasn't completely good enough. Sometimes the only way to see that is to have someone looking out for you.
Does Cisco provide that service?
We do, in a format we call "Security Intelligence Operations," or "SIO." It's kind of a two-for-one. Part one is that we're studying the Internet for reputation, making sure we catalog the "badlands of the Internet" and can automatically block -- through any number of means: email, Web, IDS, IPS, firewall -- both you going there and those areas trying to contact or connect to you. Part two: as we work increasingly with the ability to understand traffic via NetFlow, which is "free" on every router that we make, it has some really good value in situational understanding. Although you can't see payload, you can see traffic -- how much, and from which IP address to which -- and that's where it's really valuable.
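The kind of payload-free analysis Stewart describes can be illustrated with a toy flow aggregation: given only (source, destination, byte count) records, volume alone can surface a host worth investigating. The record format, sample data and threshold below are editorial assumptions, not the NetFlow export format:

```python
# Toy NetFlow-style analysis: no payload visibility, just per-flow
# (src, dst, bytes) records, yet simple volume aggregation can flag
# hosts behaving oddly. Data and threshold are illustrative assumptions.

from collections import defaultdict

def top_talkers(flows, byte_threshold):
    """flows: iterable of (src_ip, dst_ip, byte_count) tuples."""
    totals = defaultdict(int)
    for src, _dst, nbytes in flows:
        totals[src] += nbytes
    # Hosts whose total outbound volume exceeds the threshold.
    return sorted(ip for ip, total in totals.items() if total > byte_threshold)

flows = [
    ("10.0.0.5", "203.0.113.9", 1_200),
    ("10.0.0.7", "198.51.100.2", 4_000_000_000),  # gigabytes outbound
    ("10.0.0.5", "203.0.113.10", 900),
]
print(top_talkers(flows, byte_threshold=1_000_000_000))  # → ['10.0.0.7']
```

A workstation suddenly pushing gigabytes outbound -- the "generating gigabytes of traffic" case from the earlier answer -- stands out immediately, even though nothing about the packet contents was ever inspected.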
So from that alone, you can potentially identify that maybe it's not legitimate traffic.
Exactly, but it is traffic, so you have to ask, "Why is this happening?" The part I'm trying to build on is that it's just an element of helping you realize that you're not alone and don't have to do this all by yourself. You end up having, in my opinion, the need to subscribe to services that study an aggregate number of customers, so they can give you a priori warning that this is happening to you as part of a group -- that you're part of a bot-controlled or spam-generating system.
Considering the business risk posed by security breaches (data and identity theft as examples), should businesses establish a distinct threat operations unit, similar to Cisco's TOC (Threat Operations Center), that reports directly to top management? If this isn't feasible, how can customers best take advantage of the security analyses compiled by their network vendor?
I think that's a great question. Bridging from the prior point, there are essentially two ways to get data. One is an automatic system that provides it, and that's what I was just talking about with SIO. With our analysis of literally terabytes of data per day, SIO enables you to participate in a community of customers that are all essentially helping one another through the enriched study of traffic. That's what we provide, but then there's another way that is more manual. When you have a vulnerability that's been reported, you need to look at the vulnerability plus its implication.
I think it's really helpful for vendors to collaborate with other companies on security-related issues. Suppose a vulnerability is reported by, say, Microsoft; here's how Cisco says you can use Cisco to help mitigate it, and here's how Microsoft says you can use Microsoft to help mitigate it. That's helpful because it gives you something to do, versus just data: "Hi, you're vulnerable." Well, that's great, but that's not what I need. I need, "What the heck do I do about it? Give me some choices and options." So that's where I think the subscription value of getting the data manually can help.
My observation is that it's easy to get in a rut. Like on Tuesday you patch servers, and on Sunday you do other server maintenance, and nothing seems to be wrong, so there's a tendency to settle into a pattern. There's inertia, and there's also the problem of trying to pitch to management that the network is vulnerable and could be having major problems when there's no apparent problem -- even though your data could in fact be streaming out the door.
You're spot on. Just because you can't see it, doesn't mean it's not happening.
If I were to put sort of a fine point on what I think your question relates to -- I've said, very loudly, "Make sure you do the basics well." That includes being cognizant of potential future problems, but also being very cognizant that you're running the existing operation the best you can, with the true risks well mitigated or fully accepted. Because too often -- and I've even seen my own team do this at times -- you're thinking the latest problem is what you have to work on when the honest truth is that a misconfiguration over the weekend is what's causing your biggest risk. So it's risk plus mitigation equals risk management. So do the basics well, then start going for the higher-order bits, set a strategy and go.
Perschke is co-owner of two IT services firms specializing in web hosting, SaaS (cloud) application development and RDBMS modeling and integration. Susan also has executive responsibility for risk management and network security at her companies' data center. She can be reached at email@example.com.