When it comes to troubleshooting and threat detection, NetFlow AND packet capture trump all

This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.

Network World recently posted an article on how NetFlow beats packet capture when it comes to network troubleshooting and threat detection. Although the article made many good points, it missed the mark on some important aspects of packet capture.

NetFlow is great for providing application usage information and can fulfill most organizations' needs for understanding application and service activity, but packet capture solves the most granular end-user problems and is essential when it comes to compliance and transactional analysis.

Packet-based analysis provides network engineers with a complete record of network activity, while NetFlow records only a finite, and often limited, set of statistics.

For example, let's say you suspect that inappropriate documents are being emailed out of the building. With a flow-based solution, you can see that the suspect is using email, and with a more sophisticated flow-based system you may even know that attachments are being sent. But the only way to verify the contents of the attachments is to have a recording of packets, both header and payload. Armed with packet-based details, you have the proof needed to confront the suspect.

As Gartner notes, flow analysis should be done 80% of the time and packet capture should be done 20% of the time. But if your enterprise needs both, why pick a product that only incorporates NetFlow? Wouldn't it be better to choose a product that shows summary level information (like that from flow-based systems) and detailed, packet-based analysis that can be used for root-cause network analysis?

The more comprehensive packet-based solution is always a better choice when it comes to network monitoring, analysis and troubleshooting.

When asked about the absence of packet-based analysis in an enterprise network, Jim Frey of Enterprise Management Associates, a leading industry analyst and consulting firm, said, "Teams will be faced with the increasingly likely reality that the data they need to definitively troubleshoot performance problems, particularly the more subtle/complex problems, will be missing, thus causing them to fall short of best practices in supporting those depending on quality IT services."

Before we discuss why a combined solution is better, let's first look at when NetFlow is most appropriate to use and where packet capture comes into play.

When should you implement NetFlow?

NetFlow, and other flow-based technologies like sFlow, JFlow and IPFIX, are simply specifications for collecting certain types of network data for monitoring and reporting. They use the existing infrastructure of network devices to gather this data. Flows, or unidirectional communications between network elements, are the basic data structure of all flow-based systems. Flow records are collected periodically, typically every minute, from supported network devices and are processed and stored by third-party flow collectors.
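The summarisation described above, rolling individual packets up into per-flow counters keyed by the classic 5-tuple, can be sketched in a few lines. This is a minimal illustration, not a real NetFlow exporter; the `aggregate` function and the sample packet dictionaries are hypothetical constructs for this example.

```python
from collections import defaultdict

def aggregate(packets):
    """Roll individual packets up into flow records keyed by the 5-tuple
    (src_ip, dst_ip, src_port, dst_port, protocol), the same summarisation
    a router performs before exporting flow records."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += pkt["length"]
    return dict(flows)

# Three packets from one TCP session: two client->server, one server->client.
packets = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 51000, "dport": 80, "proto": 6, "length": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 51000, "dport": 80, "proto": 6, "length": 400},
    {"src": "10.0.0.2", "dst": "10.0.0.1", "sport": 80, "dport": 51000, "proto": 6, "length": 60},
]

flows = aggregate(packets)
# Because flows are unidirectional, the two directions of the same TCP
# session appear as two separate flow records.
```

Note what is lost in the rollup: the payloads are gone, and only the counters survive, which is exactly the limitation the rest of this article turns on.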

NetFlow is primarily used for overall network monitoring, trending and reporting, giving users a general view of the network, and to some degree, application performance. Since the data comes from existing network equipment, it seems "free" to the user (no appliances required to capture network data), although generation of flow-based data does put a strain on network equipment and can lead to problems when the data is most needed. It is great for solving problems like identifying bandwidth hogs and providing network usage reports to management, and because of its excellent reporting capabilities, NetFlow-based monitoring has become entrenched in enterprise networking.

When does NetFlow fall short?  Generally in three ways:

* NetFlow, and other flow-based analysis solutions, generate additional network traffic, with volume proportional to the size of the network segment being monitored. A typical flow-export packet is around 1,500 bytes, and these packets arrive in bursts that can range from tens of kilobytes to tens of megabytes per reporting interval, per reporting device.

Export takes place over UDP, so dropped packets can be a problem on busy network segments, and each dropped export datagram leaves a hole in your analysis. These additional packet streams put a strain on your network precisely when it is most vulnerable, and most in need of high-quality statistics. With a packet capture solution, all analysis is done within the capture appliance, so no additional demand is placed on the network, and the statistics hold their accuracy because no packets are lost.
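The export datagrams in question have a well-defined shape. NetFlow v5, for example, is a fixed binary format: a 24-byte header followed by up to 30 fixed-size 48-byte flow records, which is why a full export packet lands near the 1,500-byte figure above. The sketch below parses the header of a synthetic datagram; the field layout follows the published v5 format, but the `parse_v5_header` helper and the fabricated packet are illustrative.

```python
import struct

# NetFlow v5 header: version, record count, SysUptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes total).
V5_HEADER = struct.Struct("!HHIIIIBBH")

def parse_v5_header(datagram: bytes):
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_seq, engine_type, engine_id, sampling) = V5_HEADER.unpack_from(datagram)
    if version != 5:
        raise ValueError(f"not a NetFlow v5 datagram (version={version})")
    # Each v5 flow record is a fixed 48 bytes, so datagram size is predictable.
    return {"count": count, "sequence": flow_seq,
            "datagram_bytes": V5_HEADER.size + 48 * count}

# Synthetic export datagram claiming the v5 maximum of 30 flow records.
dgram = V5_HEADER.pack(5, 30, 123456, 1700000000, 0, 42, 0, 0, 0) + b"\x00" * (48 * 30)
hdr = parse_v5_header(dgram)
# A full 30-record datagram is 24 + 30*48 = 1464 bytes, just under the
# typical 1,500-byte Ethernet MTU.
```

The `flow_sequence` field is also how a collector detects the UDP loss described above: a jump in the sequence number means export datagrams, and all the flow records inside them, went missing.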

* Flow-based analysis also competes for hardware resources on the router or switch. If the device is heavily loaded, it will focus first on its prime directive, routing and switching traffic, and flow-based reporting can be compromised. This, of course, creates intermittent inaccuracies in your monitoring and reporting, and you lose essential data when you need it the most, i.e., when your network is experiencing heavy traffic, possibly due to an error.

If you are constantly pushing the limits of your network bandwidth, a flow-based solution generates an additional and unnecessary network load. In this situation, it is best to switch to packet capture, as it is entirely passive while still providing you with details needed to help discover why a problem is occurring on the network.

* Sampling is another important factor to consider with flow-based solutions. The default configuration for NetFlow is to monitor and build flow records from 100 percent of the packets, with no sampling. However, it can be configured for static "1 out of k" sampling, or the network device may simply switch to sampling mode when traffic gets heavy. Once sampling is in effect, you no longer have the full picture you need to understand what is happening on your network and how to solve problems.
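The distortion sampling introduces is easy to demonstrate. In "1 out of k" sampling the device inspects every k-th packet and the collector scales the observed counters back up by k. The sketch below (the `sampled_estimate` function and the traffic mix are made up for illustration) shows that a steady bulk transfer survives sampling well, while a short burst, often exactly the traffic you care about, is over-counted or missed entirely depending on where the sampling happens to fall.

```python
def sampled_estimate(packet_sizes, k, offset=0):
    """Deterministic 1-in-k sampling: sum the bytes of every k-th packet,
    then scale back up by k, as a flow collector would."""
    seen = sum(size for i, size in enumerate(packet_sizes) if i % k == offset)
    return seen * k

# 100 full-size packets of bulk transfer, plus a tiny 8-packet burst.
traffic = [1500] * 100 + [64] * 8
truth = sum(traffic)            # 150,512 bytes actually on the wire

est = sampled_estimate(traffic, k=10, offset=0)   # burst packet sampled once
est_missed = sampled_estimate(traffic, k=10, offset=8)  # burst never sampled

# With offset=0 the single sampled burst packet is scaled to 640 bytes
# (true burst size: 512); with offset=8 the burst vanishes from the
# estimate altogether, even though the bulk-transfer total looks fine.
```

Unsampled packet capture has no such ambiguity: every packet of the burst is on disk.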

Even Cisco offers a dedicated blade that does packet capture, further proof that NetFlow alone is simply not enough to give you the rich data you need for troubleshooting and threat protection.

Packet capture to the rescue

Although there are several scenarios where using NetFlow will come in handy, you simply can't address 100% of your network issues with a flow-based solution. So, if you're going to put a packet-based solution in place for compliance, transaction validation and network and application troubleshooting, why not make it your primary solution for all levels of monitoring and reporting?

Compliance is essential for most, if not all, businesses. We all work under some level of oversight, whether it is corporate-imposed policies or government regulations requiring periodic reporting. Maybe your HR department has policies regarding inappropriate network usage, or you're in the medical field and periodically have to audit network traffic for HIPAA compliance. Packet-based network analysis examines each and every packet, from the header through the payload, and can archive each packet for post-capture analysis, providing the most granular level of data available for compliance verification.
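That kind of full-payload audit is straightforward once the packets are archived. Captures are commonly stored in the classic libpcap file format: a 24-byte global header, then per-packet records of a 16-byte header (timestamp, captured length, original length) followed by the captured bytes. The sketch below builds a tiny synthetic capture in memory and sweeps every payload for a keyword; the `find_keyword` helper and the fabricated packets are illustrative, not a real audit tool.

```python
import struct, io

PCAP_GLOBAL = struct.Struct("<IHHiIII")  # magic, ver_major, ver_minor, tz, sigfigs, snaplen, linktype
PCAP_REC = struct.Struct("<IIII")        # ts_sec, ts_usec, captured_len, original_len

def packets(stream):
    """Yield (timestamp, captured bytes) for each record in a pcap stream."""
    if PCAP_GLOBAL.unpack(stream.read(PCAP_GLOBAL.size))[0] != 0xA1B2C3D4:
        raise ValueError("not a little-endian pcap file")
    while (hdr := stream.read(PCAP_REC.size)):
        ts_sec, ts_usec, incl_len, _ = PCAP_REC.unpack(hdr)
        yield ts_sec, stream.read(incl_len)

def find_keyword(stream, keyword: bytes):
    """Full-payload sweep: timestamps of every packet containing keyword."""
    return [ts for ts, data in packets(stream) if keyword in data]

# Build a tiny synthetic two-packet capture in memory.
buf = io.BytesIO()
buf.write(PCAP_GLOBAL.pack(0xA1B2C3D4, 2, 4, 0, 0, 65535, 1))
for ts, payload in [(100, b"GET /index.html"),
                    (101, b"Subject: CONFIDENTIAL design doc")]:
    buf.write(PCAP_REC.pack(ts, 0, len(payload), len(payload)) + payload)
buf.seek(0)

hits = find_keyword(buf, b"CONFIDENTIAL")
```

A flow record for either packet would show only addresses, ports and byte counts; the keyword match is possible only because the payload itself was retained.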

Packet-based network analysis solutions can also be used to solve specific application issues. Some application issues are so granular that they may not rise to the top of the alerts or alarms you have configured in your flow-based solution. Or, if you do get an alert, it will indicate that the user experience is poor but will not provide the detail needed to analyze the issue.

For example, let's say a help-desk worker is experiencing long latencies when trying to access the web-based phone support application. Every time the user inputs a response to a question, it takes 10 to 15 seconds for the application to respond, and oftentimes the response is simply a return to the input screen for the question that was just answered.

A sophisticated flow-based solution may report the long latencies (and some will not!), but determining the root-cause of this issue requires detailed, packet-based analysis. Using packet analysis, the network engineer can quickly isolate the packet traffic for the specific user and the specific application, look for the packets reporting slow server response time, and dig into the payloads to see that the database is reporting contention issues. The network engineer now has all the data needed to first prove that this is not a network problem, and then to help the application engineer figure out exactly what is wrong in the application logic that would cause a contention issue.
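The "slow server response time" measurement in that workflow comes straight from packet timestamps: pair each request with the first response that follows it and report the gap. Flow records give only per-flow totals, so per-transaction latency like this needs the packets themselves. In the sketch below, the `response_times` function and the trace timestamps are made up to mirror the help-desk scenario above.

```python
def response_times(events):
    """events: (timestamp_seconds, direction) tuples, direction 'req' or 'resp'.
    Pair each request with the next response and return the latencies."""
    latencies, pending = [], None
    for ts, direction in sorted(events):
        if direction == "req":
            pending = ts
        elif direction == "resp" and pending is not None:
            latencies.append(ts - pending)
            pending = None
    return latencies

# Two exchanges isolated from a capture, filtered to one user and one app.
trace = [(0.00, "req"), (12.40, "resp"),   # 12.4 s: the user-visible stall
         (15.00, "req"), (15.08, "resp")]  # 80 ms: a healthy exchange

lat = response_times(trace)
```

With the slow exchanges isolated, the engineer can then open those packets' payloads, as described above, to find the database contention error the server is reporting.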

In terms of transaction validation, let's say you need to go back in time and determine if a specific transaction transpired between a server and a user. NetFlow technologies cannot determine this, since the specific transaction might be part of an overall flow between this client and server, and the system only has visibility down to the flow, not the particular transaction.

The only solution is packet capture and analysis to solve this issue. With the details provided by a packet-based solution, any specific transaction can be verified, like acknowledgements of credit card transactions, showing exactly where the transaction originated and removing any question as to the originator and their completion of the transaction.
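Conceptually, the verification step is a search over the archived packet store for the transaction identifier, returning who sent it, to whom, and when. The record layout and the `txn=` identifier format below are illustrative assumptions, not a real product's schema.

```python
# Archived packet store: (timestamp, src_ip, dst_ip, captured payload).
archive = [
    (1700000000.1, "10.1.1.5", "10.2.2.9", b"POST /pay HTTP/1.1 txn=A1B2C3"),
    (1700000000.3, "10.2.2.9", "10.1.1.5", b"HTTP/1.1 200 OK txn=A1B2C3 ACK"),
    (1700000050.0, "10.1.1.7", "10.2.2.9", b"POST /pay HTTP/1.1 txn=ZZZZZZ"),
]

def verify_transaction(store, txn_id: bytes):
    """Return (timestamp, src, dst) for every packet carrying txn_id."""
    return [(ts, src, dst) for ts, src, dst, payload in store
            if txn_id in payload]

evidence = verify_transaction(archive, b"txn=A1B2C3")
# Both the client's request and the server's acknowledgement are recovered,
# establishing originator and completion of the transaction.
```

A flow-based system could confirm only that the client and server exchanged some traffic around that time; it could not tie the bytes to this specific transaction.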

NetFlow can provide a great deal of network visibility, along with some very detailed reporting, but there are times when only packet-based analysis will do. With packet-based network analysis, you get the most complete view of how your network and applications are performing. It provides the overall network monitoring and recording you get from flow-based solutions, along with all the detail you need when you really have to dig in to solve a problem. So why deploy two systems when one will do just fine, 100 percent of the time?

Jay Botelho is director of product management at WildPackets.