It takes more than technology to defeat a threat from inside the company. The ongoing WikiLeaks saga, with its repeated unauthorized disclosures of information, is more than an escapade against the government. These leaks dramatically document the exposure that all enterprises face from trusted individuals, be they careless or malicious.
The insider threat isn't always deliberate; accidental disclosure can put information into the wrong hands and harm a company's bottom line or individuals' careers and reputations. It is human behavior that puts critical information at risk. Addressing the threat that insiders pose to information security requires both organizational and technological measures.
The human behavior issue
Organizations are at risk because they have sensitive information, people with authorized access to it, and a third element: someone else who wants it. Even assuming that access to sensitive information is adequately protected, organizations are still at risk, because a determined, disgruntled, or simply uninformed authorized user can still find ways to steal or lose information.
The challenge is to evolve the layers of information security defenses to reduce that exposure. You will never completely eliminate the risk, because people must have some level of access to perform their jobs. And while technology can be an enabler, it will never close every hole.
[Also read Data exfiltration: How data gets out by Nicholas Percoco]
It is common to say that "people are the weakest link in the security chain." But in reality this means that people are the link we understand least. As users gain more decision-making autonomy, they also bear greater responsibility and need additional support to mitigate information risks.
In the course of performing their primary role, well-intended employees will make security trade-offs that may not align with the organization's best interests. That is because employees focus on their primary work tasks; the behavior required by security-enabling tasks often presents an obstacle to that goal.
Additionally, if allowed, they make these judgments based on their own perception of risk, a perception that can be misaligned with reality. Employees then make cost-benefit computations on their own terms, without all of the facts or the authority to assume the risk. As a result, employees may do the wrong thing from an information security standpoint in an attempt to do the right thing from a business and personal standpoint.
Understanding human behavior is critical to maximizing the efficiency and effectiveness of enterprise information protection tools and strategies. An approach that appeals to both well-meaning users' emotions and their intellect lets you align security trade-offs, achieving a more favorable security posture for both the organization and its users.
Mitigating the risk of insider threat
An insider attack may well be rare, but the consequences of such an attack on a corporation's data grow in severity as the value of that data grows.
And, in this fully connected world, there are no private tragedies. A growing body of civil law--to say nothing of public sentiment-- demands the public shaming of any enterprise that leaks other people's data. Because data comprises an increasing fraction of total corporate wealth, financial regulations treat data loss events as inherently material; thus, such matters are elevated to the Boardroom level. In order to ensure protection against insider attack, designers of national policy are now proposing to mandate a periodic inspection regime within the officially-designated critical infrastructure.
To be an insider, the individual --the would-be perpetrator-- must already have passed through an access control gate, by definition. Since the perpetrator is already inside the gate, access control is not, nor can it be, a deterrent. An inside perpetrator, to do his or her job, must have authorization credentials congruent with the task they must do. So, they are either trusted individuals or have discovered a way in. In either case, authorization systems are not deterrents to insider threat, though they may bound the downside consequence, to a degree.
The job requirements of some members of staff will entail special authority, simply because keeping the IT plant running will always require interventions that cannot be anticipated, such as when parts fail. Such special authorities may also be available to any internal investigations team that may be in place, and similarly to any internal or independent individuals entrusted and authorized to discover and expose information vulnerabilities.
In other words, there will always be persons in positions particularly capable of being an insider threat; their jobs require those very capabilities. That is not necessarily bad, but it is a reality. The question is how to control this by some means that is not itself subject to the very access control, authorization, and legitimate capabilities of the determined and knowledgeable insider.
The answer is that the operating environment itself must be altered.
Of all possible design goals for any security system, perhaps the most important, the one with the highest value, is "No silent failure." Indeed, as mentioned above, the public shaming of corporations that leak other people's data is becoming de rigueur. If we must alter the operating environment in a manner consistent with preventing the invisible or silent failure of an expert insider attack, the engineering problem is at least well-specified.
The most cost-effective solution to this engineering problem is to instrument the operating environment such that data does not move without that movement being observed. The transition from data-at-rest to data-in-motion always involves the operating environment, and does so in a way that is directly subject to discovery by instrumentation. That that instrumentation is difficult to do without side effects is a given; that that instrumentation -- that event-detection scheme -- implies the existence of a mechanism to receive and act on the detected data events in real time is likewise a given.
For maximum practical utility, the mechanism that receives and acts on the detected data events needs to be adaptive. For example:
the operator may well want to know about data events that do not require intervention, only surveillance;
the operator may want to take actions the logic of which involves not only the data event but other externalities, such as time of day and geo-location;
the operator may need to have different rules when the entire environment is diminished, such as by extreme weather events.
And so forth.
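The adaptive logic described above can be sketched as a small rule engine. The event schema, field names, and decision outcomes below are hypothetical, purely for illustration of the idea, not a description of any particular product:

```python
from dataclasses import dataclass, field

# Hypothetical schema for a detected data-in-motion event.
@dataclass
class DataEvent:
    user: str
    bytes_moved: int
    hour: int          # local time of day, 0-23
    location: str      # coarse geo-location tag

# Hypothetical operator policy, adjustable as conditions change.
@dataclass
class Policy:
    degraded_mode: bool = False   # e.g., during an extreme weather event
    trusted_locations: set = field(default_factory=lambda: {"HQ"})

def decide(event: DataEvent, policy: Policy) -> str:
    """Return 'log', 'alert', or 'block' for a detected data movement."""
    # When the entire environment is diminished, fall back to stricter rules.
    if policy.degraded_mode:
        return "block"
    # Externalities beyond the event itself: time of day and geo-location.
    off_hours = event.hour < 6 or event.hour > 20
    untrusted = event.location not in policy.trusted_locations
    if off_hours and untrusted:
        return "alert"
    # Events that require only surveillance, not intervention.
    return "log"
```

The point of the sketch is the shape of the mechanism: every observed data movement passes through rules that the operator can adapt, so no movement fails silently.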
How can the CIO accomplish this? Who are the stakeholders the CIO should partner with to make this type of program successful?
The authorized insider threat will always exist, and the application of today's mitigating controls varies widely among companies. The technological risks continue to increase as more information is digitized, storage capacity grows, and new devices (e.g., iPads) and exchange media (e.g., social networks) are used. Mitigation is not one answer but a collection of layered security solutions that evolve as the risks evolve.
Individual tools alone will not reduce the risk, but a robust, coordinated integration of tools such as role-based access control (RBAC), access/entitlement certification, data classification, security event monitoring, and data-loss prevention technologies, thoughtfully deployed, can have an impact. However, even when state-of-the-art practices and technologies such as RBAC, DLP, and SIEM are used, they are often not fully deployed or implemented with the depth necessary to track and monitor a disgruntled authorized user.
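To make the RBAC layer concrete, here is a minimal sketch of the core idea: permissions attach to roles, and users acquire permissions only through role assignments. The roles, permissions, and user names are hypothetical, not drawn from any particular product:

```python
# Hypothetical role-to-permission and user-to-role assignments.
ROLE_PERMISSIONS = {
    "claims_analyst": {"read:claims"},
    "claims_admin":   {"read:claims", "write:claims"},
    "dba":            {"read:claims", "write:claims", "export:claims"},
}

USER_ROLES = {
    "alice": {"claims_analyst"},
    "bob":   {"dba"},
}

def is_authorized(user: str, permission: str) -> bool:
    """A user holds a permission only via some assigned role."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

Note the limit such a check illustrates: it bounds what each insider can do, but it cannot deter the insider whose role legitimately grants the sensitive permission. That is precisely why it must be layered with monitoring and data-loss prevention rather than relied on alone.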
[Also read Data mapping: Domesticating the wild rabbit]
Not all anomalous behavior is malicious; some comes from well-meaning employees just trying to do their jobs. So a critical part of defense in depth must be a thoughtful application of policies and procedures, integrated with focused training and awareness programs that explain why certain behaviors pose a risk. By using tools and techniques that appeal to both users' emotions and their intellect, you can align security trade-offs, achieving a more favorable security posture for both the organization and its users. This approach will also help identify behavioral- or anomaly-based information security capabilities to detect and prevent data leaks by authorized insiders.
In summary, a CIO needs to take a holistic view of the insider issue, encompassing people, process, and technology. The approach must address both the malicious insider and the well-intended employee.
Craig Shumard is Principal at Shumard and Associates, a strategic security consulting company specializing in helping decision makers improve and measure information security solutions. Formerly the Chief Information Security Officer at CIGNA, Shumard has extensive experience in the areas of information security, privacy, and compliance.
Daniel Geer is Chief Scientist Emeritus at Verdasys, provider of Enterprise Information Protection solutions. A computer security analyst and risk management specialist, Geer is recognized for raising awareness and understanding of critical computer and network security issues, and for ground-breaking work on the economics of security.