Network Monitoring Past Present and Future: Part Two

by Dick Bussiere, Principal Architect APAC, Tenable Network Security

Vulnerabilities Must be Monitored Too

So far we’ve talked about monitoring for activities both on the network and on the endpoint, both of which are necessary. We’ve not yet talked about vulnerabilities, which, when left to fester, open pathways that attackers can exploit. With today’s fusion of traditional networks, cloud and mobile, and our imperfect security infrastructures, the number of paths by which attacks can be launched and vulnerabilities exploited is growing at an alarming rate.

Consider this: in 2014 approximately 8,000 CVEs were created – that works out to more than 153 each week, representing a huge, high-risk increase in threat surface if left unmitigated. Clearly, given both the dynamic nature of our infrastructures and the velocity at which vulnerabilities are introduced, vulnerability assessment and monitoring must be treated as a continuous process rather than something you do quarterly or annually. A vulnerability assessment done 30 days ago is already out of date.

Three techniques are used to monitor for vulnerabilities. The first uses traditional network probing of assets, where a packet is sent to the target and the response is analysed for indications of vulnerabilities. This type of assessment, called a network scan, is good at discovering assets, discovering services, and spotting blatant vulnerabilities behind those services. It is also good at finding configuration errors such as default accounts with default passwords. That said, since this type of scan is external to the device, it cannot identify major issues lurking within the device. It also cannot discover assets that are not reachable on the network – most mobile devices, for example.
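
The external probe-and-analyse loop described above can be sketched in a few lines. This is a minimal illustration, not a real scanner: the banner rules and helper names (`classify_banner`, `grab_banner`, `scan`) are hypothetical, and a production tool would probe far more than a banner.

```python
import socket

# Hypothetical banner-prefix rules: real scanners match thousands of signatures.
KNOWN_BANNERS = {
    "SSH-": "ssh",
    "220 ": "smtp-or-ftp",
    "HTTP/": "http",
}

def classify_banner(banner: str) -> str:
    """Map a service banner to a coarse service type."""
    for prefix, service in KNOWN_BANNERS.items():
        if banner.startswith(prefix):
            return service
    return "unknown"

def grab_banner(host: str, port: int, timeout: float = 2.0) -> str:
    """Open a TCP connection and return whatever the service volunteers first."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        try:
            return sock.recv(256).decode("ascii", errors="replace").strip()
        except socket.timeout:
            return ""  # some services (e.g. HTTP) wait for the client to speak first

def scan(host: str, ports: list[int]) -> dict[int, str]:
    """Probe each port; closed or filtered ports are simply omitted."""
    results = {}
    for port in ports:
        try:
            results[port] = classify_banner(grab_banner(host, port))
        except OSError:
            pass  # connection refused or timed out
    return results
```

The key limitation the article notes is visible in the code itself: everything it learns comes from what the target chooses to send back.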

The second type of scan uses device credentials to get inside the target. This type of scan can find just about every issue, since it has total visibility into the device in question. Such a scan can even identify malware that’s somehow made it around your perimeter – a very important function with mobile computers. Scans such as these can be accomplished with external scan technologies or through agents installed on the endpoints.

The third type of vulnerability “scan” is not a scan in the traditional sense since it does not in any way touch the endpoint. Rather, this new passive scanning technology observes traffic on the wire and identifies client- and server-side vulnerabilities based on deep packet inspection.

Further, since this passive technology has full visibility into all communications on a given segment, it has the ability to illuminate parts of the infrastructure that historically have not been monitored at all – deep within the LAN – for anomalies. Passive monitoring gives visibility not only into endpoint vulnerabilities, but also gives visibility into what the endpoints are doing and how they are being used. This kind of information, from deep inside your network, is invaluable from a security perspective.
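
Passive detection works because endpoints advertise themselves in traffic they send anyway. As a minimal, hypothetical example of the idea, the sketch below pulls a product and version out of an observed HTTP `Server:` header; real passive scanners apply deep packet inspection across many protocols, not one regular expression.

```python
import re

# Matches e.g. "Server: Apache/2.4.18" on its own line in a captured response.
SERVER_RE = re.compile(r"^Server:\s*([\w.-]+)/([\d.]+)",
                       re.IGNORECASE | re.MULTILINE)

def fingerprint(raw_response: str):
    """Return (product, version) observed in a captured HTTP response, or None."""
    match = SERVER_RE.search(raw_response)
    return match.groups() if match else None
```

The crucial property, as the article notes, is that nothing here ever contacts the endpoint: the evidence is harvested entirely from traffic already crossing the segment.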

One final perspective on vulnerability assessment relates to human behaviour. According to the 2016 Verizon Data Breach Investigations Report, one out of five breaches was caused by “miscellaneous errors.” In fact, 63 percent of miscellaneous breaches were related to human failings such as weak credentials, default passwords, people falling for phishing attacks and so on. Vulnerability and compliance monitoring gives you the chance to catch these human failings. For example, you can identify misconfigurations, weak configurations, weak passwords and so on – things that humans are responsible for that could compromise your security.

Monitoring for Unknown & Shadow Assets

The scanning activities mentioned above inherently perform another critical monitoring function – asset discovery. Consider one truth – any asset (hardware, software, protocol over the network, etc.) that is unknown intrinsically introduces risk. Why? Because any asset that is unknown is probably not being patched, properly configured, or otherwise maintained. That means it’s likely to have misconfigurations and vulnerabilities that will go unpatched. Further, unknown assets may have been introduced by a malicious actor and may be performing some nefarious activity. New assets of any type may be discovered using a combination of the techniques previously discussed.

Discovered assets that serve a useful business purpose may be brought under proper management and maintained, while assets that serve no purpose may be removed.
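
Discovery only pays off once it is reconciled against what you believe you own. A minimal sketch of that reconciliation, with hypothetical names, might look like this: anything seen on the wire but absent from the inventory is a candidate shadow asset, while anything in the inventory that is no longer seen may be stale or decommissioned.

```python
def reconcile(inventory: set[str], discovered: set[str]) -> dict[str, list[str]]:
    """Diff managed inventory against scan results."""
    return {
        "shadow": sorted(discovered - inventory),   # on the wire, not on the books
        "missing": sorted(inventory - discovered),  # on the books, not seen lately
    }
```

Each "shadow" entry then gets the triage the article describes: brought under management if it serves a business purpose, removed if it does not.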

Organisational Issues with Monitoring

The effectiveness of monitoring can be impacted by political boundaries as well as technical ones. For example, in many organisations, the network group controls the technical infrastructure and the IT group controls the endpoints. In some organisations, the business units themselves, not traditionally associated with IT or networking, may subscribe to cloud services that are completely “off the radar”. Organisational issues such as these leave the security group with little power to enforce monitoring objectives. For example, how does the security group get the IT group to install monitoring agents on the servers and endpoints? Even worse, how does the security group get access to all the log data and user data that’s controlled by other groups? These hurdles crop up all the time – and must be considered when designing or maintaining an effective monitoring system.

Even more concerning is the fact that endpoints and networks are in a constant state of flux under the control of other parties. So what happens when the IT group or network group disables monitoring points that were previously operational? These issues force a requirement to “watch the watchers” – in other words, monitor that the monitoring points are indeed operational.

Modernising Your Monitoring

Monitoring, just like the threat environment, has evolved over time. We have discussed some trends that impact how monitoring can be effectively performed and the emerging tools to accomplish this monitoring. If you are still sniffing packets with an intrusion detection system at the perimeter, that’s OK, but it’s not enough given the perforation of the perimeter, the emergence of cloud computing and the overall trend towards mobile computing. You need to evaluate how these trends are impacting your infrastructure and instrument accordingly with some of the technologies detailed in this article.

One final point – as more and more monitoring technologies are employed in your environment, the centralisation of the data from the various sensors becomes even more critical. You don’t want to have 10 different consoles to look at. Rather, you need to consolidate the data from the various sources into a single place that can correlate the data and present it effectively in an actionable dashboard format.
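
Consolidation starts with normalising every sensor’s events into one common schema before any correlation can happen. The field names below are purely illustrative – each real sensor has its own native format – but the shape of the step is the same:

```python
# Hypothetical per-source field mappings onto a common event schema.
FIELD_MAPS = {
    "ids":     {"src": "src_ip", "sig": "summary", "ts": "timestamp"},
    "scanner": {"host": "src_ip", "plugin": "summary", "time": "timestamp"},
}

def normalise(source: str, event: dict) -> dict:
    """Rename a source's native fields to the common schema and tag the origin."""
    mapping = FIELD_MAPS[source]
    out = {common: event[native] for native, common in mapping.items()}
    out["source"] = source
    return out
```

Once every feed emits the same `src_ip`/`summary`/`timestamp` shape, a single correlation engine and dashboard can sit on top – instead of ten consoles, one.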
