In 2014, several successful malicious attacks against large financial services, government and private sector organisations gave a clear indication of the changes occurring in the network security industry. The recent Ponemon Institute Cost of a Data Breach study found the average cost of a data breach to be $3.5 million, with an average cost of more than $145 per compromised record.
Further, Akamai's State of the Internet Security Report for Q4 2014 also points to a rise in attacks, with a 90 per cent increase in Distributed Denial of Service (DDoS) attacks and a 121 per cent increase in infrastructure-layer attacks over the previous quarter.
Despite having significant security measures in place, organisations are still falling victim to cyber-attacks. While these organisations all had traditional, on-premises network security safeguards in place, they still lost sensitive intellectual property.
Unfortunately, these attacks proved that reliance on traditional methodologies is not enough to stop modern threats. Reactive mechanisms do provide a layer of security; however, knowing what threats lurk on the Internet and proactively protecting critical web infrastructure from them can be invaluable.
Challenges in detecting threats flying under the radar
Protecting against attacks armed with advanced malicious technologies requires far stronger threat-prevention techniques than legacy systems, which do not scale and degrade performance under load. It requires an intelligence-based architecture that aggregates and correlates information from a variety of unified threat management sources: a unified platform that can analyse user behaviour against internal data and external sources to determine whether users on a network are doing their jobs or something more nefarious. This, in turn, presents a set of challenges to organisations:
- Limited data sources: Companies simply do not have data sources that capture data from across the globe. An IP address, for example, can be the source of malicious traffic on the other side of the world and go unnoticed simply because organisations lack the ability to capture and flag that address.
- Constraints in analysing large datasets in near real time: While Big Data and large-scale analytics platforms have been around for a while, organisations have, by and large, yet to apply them to web protection, predominantly because of the large investments required to do so.
- Lack of heuristics engines: The application of heuristics has been prevalent in endpoint systems, but their use in proactive web defence mechanisms is relatively limited.
- Scarce expertise: Qualified security expertise is hard to come by and expensive to employ. This is a critical gap in security postures today. Once a threat is identified, the ability to create/push rules that plug vulnerabilities is critical, but most often, very expensive.
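The first of these challenges, limited data sources, is essentially a data-aggregation problem: an IP address flagged by one sensor may mean little, while the same address reported by independent sources on different continents is far more credible. A minimal sketch of that corroboration idea follows; the feed names and addresses are purely illustrative, not a real vendor's data model.

```python
from collections import defaultdict

def aggregate_threat_feeds(feeds):
    """Merge IP observations from several feeds into one view.

    `feeds` maps a feed name to the set of IP addresses that feed
    has flagged as malicious (all names here are hypothetical).
    Returns a dict of IP address -> list of feeds that reported it.
    """
    sightings = defaultdict(list)
    for feed_name, addresses in feeds.items():
        for ip in addresses:
            sightings[ip].append(feed_name)
    return dict(sightings)

def corroborated(sightings, min_feeds=2):
    """IP addresses reported by at least `min_feeds` independent sources."""
    return {ip for ip, sources in sightings.items() if len(sources) >= min_feeds}

# An address flagged by sensors in two regions is corroborated;
# one seen by a single sensor is not (yet).
feeds = {
    "apac_sensor": {"203.0.113.7", "198.51.100.9"},
    "emea_sensor": {"203.0.113.7"},
}
print(corroborated(aggregate_threat_feeds(feeds)))  # {'203.0.113.7'}
```

A single organisation rarely operates sensors in enough locations to do this corroboration itself, which is why globally distributed reputation services fill the gap.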
Client Reputation and Proactive Defence Strategies
Client Reputation technologies better protect applications and web infrastructure against DDoS and application-layer attacks. They do this by identifying, and sharing with organisations, the likelihood that a particular IP address falls into one of the following "malicious" categories: web attackers, Denial of Service (DoS) attackers and scanning tools. Client reputation technologies leverage advanced algorithms to compute a risk score based on prior behaviour as observed on a massively distributed network. The algorithms use both legitimate and attack traffic to profile the behaviour of attacks, clients and applications. Based on this information, a risk score can be assigned to each IP address, allowing organisations to choose which actions they want their traditional defences to take for an IP address with a given risk score.
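The score-then-act pattern described above can be sketched as follows. The category weights, score cap and action thresholds below are invented for illustration; real reputation services use far more sophisticated, proprietary algorithms.

```python
def risk_score(observations):
    """Toy risk score from prior behaviour: weight the count of
    malicious requests seen per category, capped at 10.
    Weights are illustrative, not any vendor's actual values."""
    weights = {"web_attack": 0.5, "dos": 0.3, "scanner": 0.2}
    raw = sum(weights.get(category, 0.0) * count
              for category, count in observations.items())
    return min(10.0, raw)

def action_for(score, thresholds=((9.0, "deny"),
                                  (6.0, "challenge"),
                                  (3.0, "monitor"))):
    """Map a risk score to an operator-chosen action; a score below
    every threshold is allowed through."""
    for threshold, action in thresholds:
        if score >= threshold:
            return action
    return "allow"

# A client seen launching web attacks and scanning earns a mid-range
# score, so the operator's policy challenges rather than denies it.
score = risk_score({"web_attack": 12, "scanner": 4})
print(score, action_for(score))
```

The key design point is the separation of concerns: the reputation service computes the score from globally observed behaviour, while each organisation decides locally which action each score range should trigger in its existing defences.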
Should organisations pay heed?
The answer lies in understanding that multilayered defence is key, and that such technologies add another layer of protection that complements existing defences. They also provide better input to critical security decisions and fill an important gap in defence postures: forecasting intent before exploitation.
Overall, a plethora of technologies is available, each filling a niche and meeting a specific need. Client reputation services give organisations the ability to forecast a threat before it is exploited, which is needed to maintain business continuity and minimise the impact of cyber threats.