The arms race between competing artificial intelligence technologies will ultimately decide how we address our cybersecurity challenges.
Decision makers need to understand the underlying technology of the artificial intelligence solutions if they’re to build the right strategy to cut their cybersecurity risk.
The use of artificial intelligence and machine learning systems is increasing rapidly. ‘Machine learning’ describes systems that can learn the correct response simply by analysing lots of sample input data, without having to be explicitly programmed to perform specific tasks. Perhaps the most successful and widespread technique is the use of artificial neural networks (ANNs).
ANNs emulate the way that neurons function in biological systems such as the human brain, creating a network of interconnected artificial neurons. They have proven to be very effective at a number of tasks, especially those involving pattern recognition, such as computer vision, speech recognition or medical diagnosis from symptoms or scans.
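To make the idea concrete, here is a minimal sketch of a single artificial neuron: weighted inputs, a bias and a threshold activation, trained with the classic perceptron rule. The task (learning logical AND), the learning rate and the training data are all invented for illustration; real networks connect many thousands of such neurons in layers.

```python
# A minimal artificial neuron: weighted inputs, a bias, and a step
# activation, trained with the perceptron rule on a toy dataset.
# It learns the correct response purely from labelled samples, without
# being explicitly programmed with the task's rules.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive."""
    return 1 if x > 0 else 0

def train_neuron(samples, epochs=20, lr=0.1):
    """Adjust weights from labelled examples using the perceptron rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out          # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy task: learn logical AND purely from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
predictions = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(predictions)  # the neuron has learned AND: [0, 0, 0, 1]
```

The point is that nowhere in the code is the AND rule written down; the correct behaviour emerges from the sample data, which is what distinguishes machine learning from conventional programming.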
Not surprisingly, ANNs are finding applications in the challenges created by the modern world, such as cybersecurity. AI scales, so one algorithm could be replicated to deliver the productivity of many workers – think of it as being able to clone your best employee. Why is this important? Because the threat from cyber attacks is scaling at the same rate.
Perhaps the most-used tool in the cybercriminal’s toolbox is the DDoS, or distributed denial of service, which is little more than a data hosepipe being pointed at a particular server (or service). Now imagine this deluge scaled up and directed at entire corporations, countries or even continents. The only realistic way to defend against an automated attack is to use an automated defence, and that defence is AI.
The sheer volume of network traffic is only part of the challenge in this scenario; the other is the fact that the traffic is typically encrypted. However, AI can learn to identify patterns, even in encrypted packets, that could point to malicious or unusual payloads inside the traffic, at line speed. This ‘fight fire with fire’ approach will make cybersecurity a battlefield for AI-empowered systems, fought all day, every day in the near future; but with every packet inspected, the neural networks will learn to defend better.
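Even when payloads are encrypted, metadata such as packet sizes and flow rates remains visible, and that is enough to learn a profile of ‘normal’ and flag departures from it. The sketch below is a deliberately crude statistical stand-in for the pattern recognition a trained network would perform; the traffic figures and the three-sigma threshold are invented for illustration.

```python
# Hedged sketch: anomaly detection on encrypted-traffic metadata.
# We never look inside the payload; we learn the mean and spread of
# bytes-per-second on benign flows, then flag flows that sit far
# outside that profile.

import statistics

def fit_profile(benign_flows):
    """Learn the mean and spread of bytes-per-second in normal traffic."""
    return statistics.mean(benign_flows), statistics.stdev(benign_flows)

def is_suspicious(flow_rate, profile, threshold=3.0):
    """Flag flows more than `threshold` standard deviations above normal."""
    mean, stdev = profile
    return (flow_rate - mean) / stdev > threshold

# Bytes-per-second observed on typical benign flows (invented values).
benign = [1200, 950, 1100, 1300, 1000, 1250, 980, 1150]
profile = fit_profile(benign)

print(is_suspicious(1280, profile))   # ordinary flow -> False
print(is_suspicious(50000, profile))  # DDoS-scale flood -> True
```

A real deployment would replace the single statistic with a model over many features (packet sizes, timing, destinations), but the principle is the same: learn what normal looks like, then react to deviations at line speed.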
For the past few decades, neural networks have largely been implemented in software, operating as a model, executed on general-purpose processors. The software emulates the way that each individual neuron functions, as well as the interconnections between them that govern their collective behaviour. This is fine if you want to run a large-scale neural processing job on data that has been collected and uploaded to one of the major cloud platforms or a datacentre full of servers, but many real-world applications call for processing to be handled at the point of action, meaning that it has to be portable, or at least not require a rack full of servers to function.
The problem is that many small-scale devices, such as smartphones, simply don’t have the compute power or memory space to operate neural nets of the size and complexity that would be required for many tasks. For this reason, applications such as Apple’s Siri virtual assistant typically upload speech to the cloud for processing.
Neuromorphic computing, which goes back to the roots of neural nets and tries to more closely simulate the way that biological neurons function, is a different approach to the problem. Existing neural nets have evolved into complex structures with many specialised layers that have developed beyond anything that exists in nature. However, the artificial neurons themselves typically output a single continuous value for each input, rather than the discrete spikes of a biological neuron; in that sense they are truly artificial.
One of the most promising neuromorphic computing approaches uses a new type of neural model known as a spiking neural network (SNN), which more closely mimics its biological counterpart. In an SNN, neurons communicate through a series of spikes, with information being encoded not just in the rate of firing of the spikes, but also in their precise timing.
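The simplest spiking model is the leaky integrate-and-fire (LIF) neuron: input current charges a membrane potential that leaks over time, and the neuron emits a spike when the potential crosses a threshold. The sketch below uses invented constants purely to show how both spike rate and spike timing carry information; it is not a model of any particular neuromorphic chip.

```python
# Hedged sketch of a single leaky integrate-and-fire (LIF) neuron.
# The membrane potential integrates input current, decays by a leak
# factor each step, and resets after crossing the firing threshold.

def simulate_lif(input_current, steps=50, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spike_times = []
    for t in range(steps):
        potential = potential * leak + input_current  # integrate with leak
        if potential >= threshold:                    # threshold logic,
            spike_times.append(t)                     # not heavy maths
            potential = 0.0                           # reset after a spike
    return spike_times

# A stronger input drives the neuron to spike earlier and more often,
# so information is encoded in both spike rate and precise spike timing.
print(simulate_lif(0.30))  # frequent, early spikes
print(simulate_lif(0.15))  # sparse, later spikes
```

Note that each step involves only a multiply, an add and a comparison against a threshold, which hints at why spiking hardware can be so power-efficient compared with the dense matrix arithmetic of a conventional deep network.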
The SNN approach has several advantages. It learns rapidly, in real time. It is highly accurate: in casino environments it has achieved recognition rates as high as 99.76 per cent. It can be used on a wide variety of images, including photography or footage rendered poor quality by dim lighting or low resolution. And finally, its power consumption is small, because it works with threshold logic rather than heavy mathematical functions.
These qualities have attracted considerable interest from law enforcement agencies. When analysing live video streams, where massive datasets and thousands of reference images are not available, a police department needs technology that can still recognise patterns, and can do it immediately, without the weeks of training and learning that a convolutional neural network system would require.
Such neuromorphic processors could lead to a new world of mobile devices and sensors able to operate intelligently and independently, without requiring mains power or a network connection to the cloud to provide their computational capabilities.
As the threats to our infrastructure evolve, it stands to reason that cybersecurity decision makers will increasingly opt for SNNs to deal with the threats at the coalface, at the point of action.
Robert Beachler, Senior Vice President for Marketing and Business Development of ASX-listed artificial intelligence company BrainChip