Hope is not a security strategy – which is why AI is no longer optional
- 08 September, 2017 12:32
Good information-security personnel have become harder and harder to find, and that scarcity is undermining efforts to mount an effective defence against ballooning volumes of security alerts. Machine learning-driven artificial intelligence has long been one of many possible solutions to this problem, but its rapid advancement is helping it emerge as the only realistic answer in the long term.
That’s a rapid shift for an industry that was still waking up to the real potential of security information and event management (SIEM) just a few years ago. But as attack volumes grew and attackers’ tactics diversified, it became increasingly clear that simply getting better visibility over security activity wasn’t going to be enough for humans who were already struggling to keep up.
“There is a significant shortage of talented, qualified security personnel who have the requisite experience to help move the needle for an organisation,” says Matt Winter, vice president of marketing with security specialist LogRhythm.
One recent estimate projected a global shortfall of 3.5m security specialists by 2021, up from 1m in 2014. Yet that’s not the only problem: high demand and low supply have kept salaries and retention costs sky-high.
“The challenge for most organisations,” Winter said, “is that even if they could find enough people – and if they could afford to hire them all – there would still be more work for them to do than they could manage in a 24-hour day.”
SIEM provided a way to capture network logs and security activities – but without enough skilled humans, even the best SIEM is just documenting the compromise of your organisation. Cisco’s 2017 Annual Cybersecurity Report evaluated the extent of the problem, noting that some 44 percent of surveyed security operations managers said they were getting more than 5000 security alerts per day.
Even well-equipped security organisations, the study concluded, can only investigate 56 percent of the security alerts they receive on a given day. Just half of the investigated alerts are deemed legitimate, and only 46 percent of these are even remediated.
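To put those survey figures in perspective, the following back-of-envelope calculation applies the percentages from the Cisco report to the 5000-alerts-per-day threshold cited above. The daily volume is used here purely as a round illustrative number; real organisations will vary.

```python
# Illustrative arithmetic using the figures cited from Cisco's 2017
# Annual Cybersecurity Report. 5000 alerts/day is the survey threshold,
# used here as an example volume.
alerts_per_day = 5000
investigated = alerts_per_day * 0.56   # share of alerts a well-equipped team can investigate
legitimate = investigated * 0.50       # investigated alerts deemed legitimate
remediated = legitimate * 0.46         # legitimate alerts actually remediated

print(int(investigated))  # 2800
print(int(legitimate))    # 1400
print(int(remediated))    # 644
```

On these numbers, of 5000 daily alerts only around 644 legitimate incidents are ever remediated, which is the gap the article describes.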
Those figures confirm what most security operations managers already know to be true: that the volume of attacks being actively remediated is well below the volume of known attacks, and even further below the volume of all attacks that are hitting the typical organisation.
“With the amount of information available to people out there, it is so easy in this day and age to socially engineer a compromise,” Winter said, with many of those compromises sneaking smoothly past corporate defences by exploiting weaknesses in security perimeters. “Even the best use of IT, information and resources can’t prevent a compromise.”
The gap between remediated attacks and unknown attacks remains large and problematic – “hope is not a strategy”, Winter notes – and AI has emerged as the only tool capable of closing it.
Progressive security vendors have recognised this trend: Gartner recently predicted that AI investments in IT resilience would triple by 2020, as those investments increasingly link security protections to the cost of business interruption.
LogRhythm, for one, already offers its AI Engine for real-time visibility of risks. Such tools are being further expanded by the delivery of cloud-based AI tools that not only simplify the delivery of data-based security protections – but offer better protection than ever by aggregating baseline and threat activity data in much larger quantities than any one organisation could provide on its own.
“Analytics delivered from the cloud augments security platforms,” Winter explains, “where we can apply deep learning and AI to threat detection in the cloud in a way that is – by virtue of the types of techniques and the compute resources required to do these kinds of analytics – unattainable and unaffordable for most organisations.”
Despite its power, AI shouldn’t be seen as a cure-all for the security woes that organisations face. Human intuition and problem-solving still have important roles to play in the overall security defence – but with a large proportion of security alerts relating to mundane, easily remedied issues, the most successful organisations will be those that tap AI’s power to more rapidly triage new incidents and pass only the most complex issues to humans for intervention.
“AI can help people do more with the limited time and resources they have available to them,” Winter said. “Our goal is to detect the compromise before the breach can occur – and AI can know to automatically take certain steps given the nature of the threat. These automated actions can neutralise a threat in your environment before that threat can cause further damage.”
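The triage model described above – automatically neutralising mundane alerts with known remediations while escalating complex ones to analysts – can be sketched in a few lines. All names, thresholds, and fields here are hypothetical illustrations of the concept, not any vendor’s actual API or product behaviour.

```python
# A minimal sketch of automated alert triage: alerts with a scripted
# remediation and low severity are handled automatically; everything
# else is escalated to a human analyst. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int         # 1 (low) .. 10 (critical) -- hypothetical scale
    known_playbook: bool  # a scripted remediation exists for this alert type

def triage(alerts):
    auto_remediated, escalated = [], []
    for alert in alerts:
        if alert.known_playbook and alert.severity < 7:
            auto_remediated.append(alert)  # neutralise early in the kill chain
        else:
            escalated.append(alert)        # needs human intuition
    return auto_remediated, escalated

alerts = [
    Alert("phishing-filter", 3, True),
    Alert("endpoint", 9, False),
    Alert("firewall", 5, True),
]
auto, humans = triage(alerts)
print(len(auto), len(humans))  # 2 1
```

In practice the decision logic would be a trained model rather than a fixed threshold, but the workflow shape – automate the routine, escalate the critical – is the one the article describes.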
It’s important to think of AI as being part of a detection-and-response workflow rather than a point solution, he noted – highlighting its importance as a mechanism for focusing security teams’ expertise on the incidents that really matter.
“Large organisations have barbarians at the gate, pounding on the door every day,” Winter said. “You can’t respond to all that stuff, or let that noise distract you from detecting and responding to the really critical threats targeting your environment. The idea is to mitigate threats as early in the kill chain as possible.”