Our quest to use AI in security is deeply affected by our views and perceptions, according to Davi Ottenheimer, President of flyingpenguin, a security consultancy that designs and assesses risk mitigation. He was formerly head of information security at Barclays Global Investors, the world’s largest investment fund manager.
Citing the example of driverless cars, Ottenheimer noted that we’ve been promised such vehicles for over half a century. Back in 1960, Robert McNamara developed a driverless system at Ford that was expected to be in production vehicles by the mid-1970s.
But automation soon got a bad rap as a result of the Cold War and the capacity of automated systems, unrelated to cars, to launch missiles.
“People are worried about where automation goes,” says Ottenheimer.
Ottenheimer says people want to improve the world using automation within rules. He likens this to the application of Game Theory where there is creativity within bounds in order to become successful.
He says many fallacies about AI are holding us back. Naysayers, for example, commonly try to shame people for talking about specific technologies, demean anyone who isn’t an expert, or dismiss the views of people outside the core group of experts.
Ottenheimer has designed a model of how organisations use security analytics. He says they tend to be either compliance driven or hypothesis driven.
A compliance-driven view of security focusses on being easy and routine, requiring minimal judgement. A hypothesis-driven practice, on the other hand, identifies, stores, evaluates and adapts information in order to get the best outcome from security analytics.
It’s not a question of being either compliance or hypothesis driven, says Ottenheimer – both positions have merit.
This mirrors comments made by RSA Executive Chairman Amit Yoran, who said we require creativity (being hypothesis driven) to find new ways to combat security threats.
When the hypothesis-driven approach is highly developed, it becomes possible to use analytics and AI to predict security events before they occur. However, Ottenheimer posits that this can become disconcerting. He likened it to the special talents of Corporal Walter “Radar” O’Reilly in the TV show MASH: Radar knew when choppers carrying wounded soldiers were arriving at the army hospital before anyone else.
In Ottenheimer’s view, we need a balance between rationality and imagination. He quotes Bertrand Russell, the author of “Education and the Good Life”:
“Without physics, physiology and psychology we cannot build the new world. It is only through imagination that we become aware of what the world might be; without it, ‘progress’ would become mechanical and trivial.”
For security professionals, this means being able to articulate security requirements with facts and practical examples, while avoiding theoretical or hypothetical arguments.
With respect to the use of AI, Ottenheimer says we need to educate people so they understand AI is not replicating the human brain’s capacity for thought and reason. Just as an aeroplane doesn’t have feathers, he says, AI systems won’t work or look like brains.
The tools we will use will take into account many factors, such as financial, social and personal information, but they won’t act with complete autonomy. Rather, they will work alongside human operators.
Anthony Caruana attended RSA Conference in San Francisco as a guest of RSA Corporation.