As we move relentlessly forward into an era of global interconnectivity and a pervasive “internet of things”, the operational and security needs of critical infrastructure have deservedly been put in the spotlight. Certain critical infrastructure sectors face the added challenge of much longer update cycles, because change threatens availability and carries significant cost. Once a system is functioning reliably, operators are hesitant to change components or platforms.
This dilemma is most apparent in the energy sector, a “critical path” in the fabric of every national economy. Energy contends with meaningful and rapidly evolving security risk to both operations and data, and does so on an aging and frequently vulnerable information infrastructure. Those tasked with securing these systems must also maintain their operational availability and efficiency, and these dual, sometimes conflicting responsibilities leave them scrambling to meet the challenge.
A potential solution is one not often associated with critical infrastructure: the cloud. More specifically, the class of technologies upon which modern cloud computing has been built: virtualization. Traditionally, the idea of energy sector systems becoming “cloud-enabled” has been met with a combination of skepticism and fear.
Skepticism that such aged systems could leverage cloud technologies without massive investments in time and budget. And outright fear that cloud-enabling the energy sector would open systems to new threats and negatively impact smooth operations.
This thinking needs to change. There is an incredible opportunity for critical infrastructure organisations to learn from a significant technology movement happening in the commercial and government sectors. A growing number of those organisations are leveraging virtualization and cloud to transform security – reducing risk and complexity without having to “rip and replace” physical infrastructure.
Refocusing Security Around Risk and Applications
There are three common financial models used to determine security investment: the insurance model, the entitlement model and the risk model. The insurance model weighs the cost of addressing a particular threat or vulnerability proactively against the cost and likelihood of it occurring.
The problem with this model is that it neither considers opportunity cost nor provides a holistic view of where investments are being made. The entitlement model is a ‘status quo’ approach: we invest where we have invested in the past. The problem with this model is that it assumes previous investments were appropriately focused and that the world has not changed. The risk model, which allocates investment according to risk, is by far the most effective.
In a risk model, you take the investment level as a given. The only question is, “Are we investing our time and money on the efforts that will have the most significant impact on reducing risk?” This is ultimately the fiduciary duty of security personnel, and therefore the best model for a critical infrastructure environment.
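The contrast between the models can be made concrete with a little arithmetic. A minimal sketch follows, in which the insurance model's cost-versus-likelihood comparison becomes an expected-loss calculation, and the risk model ranks candidate controls by expected loss removed per unit of spend. All control names and figures here are invented purely for illustration; none come from the source.

```python
# Illustrative sketch of risk-based prioritisation. All names and
# figures below are hypothetical, chosen only to show the arithmetic.

controls = [
    # (name, annual cost, likelihood of incident per year,
    #  impact if it occurs, fraction of that risk the control removes)
    ("Patch legacy HMI servers",     50_000, 0.30, 2_000_000, 0.6),
    ("Segment control network",     120_000, 0.10, 5_000_000, 0.8),
    ("Harden remote-access portal",  20_000, 0.25,   800_000, 0.5),
]

def expected_loss(likelihood, impact):
    """Annualised loss expectancy: probability times impact."""
    return likelihood * impact

# Risk model: rank controls by expected loss removed per unit spent,
# rather than judging each control in isolation (the insurance model).
ranked = sorted(
    controls,
    key=lambda c: expected_loss(c[2], c[3]) * c[4] / c[1],
    reverse=True,
)

for name, cost, p, impact, eff in ranked:
    reduction = expected_loss(p, impact) * eff
    print(f"{name}: removes ~{reduction:,.0f}/yr of expected loss "
          f"for {cost:,} spent ({reduction / cost:.1f}x)")
```

On these made-up numbers, patching the legacy servers ranks first: it removes about 7.2 units of expected loss per unit spent, versus 5.0 for the portal and 3.3 for segmentation, which is exactly the kind of comparison the entitlement model never makes.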
But a risk model requires us to focus security not on infrastructure (servers and networks) but on the real risk areas: applications and data. That is ultimately what we need to protect. Looking at security through the lens of our applications allows us to prioritise and focus our investment, determine how to compartmentalise our environment, and dictate how best to align controls. It enables us to create least-privilege environments, which remove non-value-added risk, reduce the attack surface and greatly improve “signal to noise” for detecting more sophisticated and targeted attacks.
Barriers to Progress
The compelling benefits of an application-focused approach to security are hard to resist, but a frequent barrier is that most organisations don't know or understand their applications, or how those applications relate to the infrastructure. Further, the dynamic nature of our data centres means that any answer to those questions is rapidly out of date. Embracing an application-focused approach without answers to those questions could yield greater complexity and even operational disruption.
The second major barrier relates to the infrastructure itself. Many of these environments are neither operationally nor economically in a position to swap out hardware, operating systems, network components, etc. How can we truly change security posture without changing out these security-challenged components?
Cloud Can Be The Key To Risk Reduction
This is where cloud and virtualization become particularly interesting. Virtualization, the fundamental fabric of the cloud, provides an abstraction layer between our existing physical infrastructure and the applications running on top of it. This yields several unique properties which can really transform security.
The first of these unique properties is isolation. The decoupling of applications from physical infrastructure enables highly flexible placement of controls, which opens the door to granular isolation boundaries around applications. It also provides an insertion point for controls that can be completely isolated from the application security posture. This solves an age-old problem in security: the tradeoff between context and isolation.
Security in software has all the context needed to make security decisions: you can see the application, the user, the configuration, and so on. But it lacks isolation from the vulnerable application and its guest environment, so it can be manipulated or turned off. Security in hardware provides isolation, but lacks that context. Security in virtualization can have both.
The second property is application visibility. From the virtualization layer we have unique visibility into both the runtime status of applications and their provisioned state; it allows us to see the infrastructure through the lens of the application. Once the compute, storage and network interfaces of an application are isolated, we gain application-focused visibility into its compute, storage and network behaviour, state and posture.
The third unique property is automation. Virtual infrastructure is inherently automatable. That turns out to be incredibly useful for security, enabling security to be blueprinted into the application and to follow the application components as they move. It also opens the door to automated remediation such as quarantining, increased logging and full packet capture.
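As a sketch of what such automated remediation might look like, the snippet below models quarantining a compromised workload by swapping its security group for a restrictive one and enabling packet capture. The `VirtualizationAPI` class and every method on it are hypothetical stand-ins for whatever management plane the platform actually exposes; none of these names come from a real SDK.

```python
# Hypothetical sketch of automated quarantine. VirtualizationAPI is a
# toy in-memory stand-in for a real virtualization management API;
# none of these class or method names come from an actual SDK.

class VirtualizationAPI:
    """Toy stand-in for a virtualization management plane."""
    def __init__(self):
        self.security_groups = {}    # vm_id -> security group name
        self.capture_enabled = set() # vm_ids with full packet capture on

    def set_security_group(self, vm_id, group):
        self.security_groups[vm_id] = group

    def enable_packet_capture(self, vm_id):
        self.capture_enabled.add(vm_id)

def quarantine(api, vm_id):
    """Respond to an alert: isolate the workload and capture traffic.

    Because policy follows the workload rather than the physical
    network, this takes effect wherever the VM runs, with no recabling.
    """
    api.set_security_group(vm_id, "quarantine-no-egress")
    api.enable_packet_capture(vm_id)

api = VirtualizationAPI()
api.set_security_group("scada-historian-01", "app-historian")
quarantine(api, "scada-historian-01")
print(api.security_groups["scada-historian-01"])  # prints: quarantine-no-egress
```

The design point the sketch illustrates is that remediation becomes a policy change at the virtualization layer, not a physical intervention, which is what makes it safe to automate.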
Automation in the absence of granular isolation is both complex and fragile, because boundaries, policies and posture blur across applications sharing the same failure domains. Any change to one application is likely to have unanticipated consequences for others, since it requires modifying inordinately complex policy at shared control points.
Leveraging these three unique properties, we can compartmentalise an application in a micro-segment and align controls in and around that segment, all without changing the physical infrastructure. It enables us to create a least-privilege environment around an application, greatly reducing risk and improving signal to noise. And all of this is possible in a highly automated environment, which reduces labour requirements and minimises the risk of human error.
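To make the micro-segmentation idea concrete, here is a minimal sketch of a least-privilege segment policy expressed as a default-deny allow-list: each segment enumerates the only flows it permits, and anything not listed is denied. The application name, tiers and ports are made up for illustration and do not describe any particular product's policy language.

```python
# Minimal sketch of an application micro-segment as a default-deny
# allow-list. The application, tier names and ports are invented
# purely for illustration.

# Each rule is (source tier, destination tier, destination port).
# Any flow not listed is denied: that is the least-privilege property.
SEGMENT_POLICY = {
    "energy-billing-app": {
        ("web", "app", 8443),  # web front end may call the app tier
        ("app", "db", 5432),   # app tier may reach its database
    }
}

def is_allowed(app, src_tier, dst_tier, port):
    """Default deny: a flow is permitted only if explicitly listed."""
    return (src_tier, dst_tier, port) in SEGMENT_POLICY.get(app, set())

assert is_allowed("energy-billing-app", "web", "app", 8443)
# The web tier talking straight to the database is not on the list:
assert not is_allowed("energy-billing-app", "web", "db", 5432)
```

Because the policy is expressed in application terms rather than IP addresses and VLANs, it can be blueprinted with the application and enforced at the virtualization layer wherever the workloads happen to run.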
Critical infrastructure is in dire need of a security transformation, but the constraints to which these environments must adhere have made meaningful transformation infeasible. Virtualization may well be the key to addressing this enormous risk in a way that is operationally and economically feasible. So the most interesting question may not be “how do we secure virtualization?”, but rather “how do we leverage virtualization to secure our critical systems?”
Tom Corn is the Senior Vice-President for Security Products for VMware, Inc.