In part three of a series on understanding the processes and tools behind an APT-based incident, CSO examines the process of exploitation and installation. At this stage, things have started to go wrong, as the attacker(s) have been successful in delivering their malicious payload.
Make no mistake, if the attacker's campaign has made it this far, you have a problem, but you also have a chance to fix it.
At this point, the attacker has delivered an email with a malicious attachment which, if opened, exploits a vulnerability in software that your organization uses. They're confident in their odds of success, because data collected during the reconnaissance phase told them what to target.
If the exploitation is successful, the system is compromised; it's as simple as that. However, it is possible that the attacker(s) made noise while cracking your defenses. If so, evidence of their methods and the type of attack might be found in the network or system logs. In addition, proof of the attack may have been delivered through one of the various security event monitors used by your organization.
Unfortunately, if the exploitation wasn't noticed the moment it happened, the odds are not in your favor. According to the 2013 Verizon Business Data Breach Investigations Report, 66 percent of breaches remain undetected for months, if not longer. When a breach is discovered, it is most often because an unrelated third party disclosed it.
Once exploitation takes place, the attacker(s) need to establish a foothold. This is where installation happens, and it is what most endpoint protections guard against. The foothold is established by loading additional tools onto the compromised host in order to maintain control.
This is also the stage where an attacker may pivot from the initial point of entry to another system or server on the network. Pivoting is useful when the attacker(s) enter the network at a point that doesn't offer access to their ultimate target. It also helps them remain undetected: while the compromised laptop may be discovered as the initial entry point, the hijacked desktop the attacker pivoted to may go unnoticed.
Oftentimes pivoting is possible because of permissive network policy: the host the attacker(s) have compromised is granted elevated permissions via the firewall or an ACL, giving them direct access to other systems. If that's the case, they won't need to risk exposure by leveraging another exploit or additional malware in order to pivot; otherwise, they'll do so using those means.
Incident response programs are geared toward the installation phase of an attacker's campaign: prevention has failed, so response (containment, mitigation, and recovery) is the only option left. However, incident response is only possible if detection has occurred. Assuming the exploitation phase wasn't detected and the installation phase was successful, what now? If you're lucky, you can detect some indicators of compromise and use them to help move the incident response process along.
Indicators of compromise (IOC) are exactly what the name suggests, but they're often overlooked, because they exist in the mountains of logged data that's already on the network. No one has time to read the hundreds or thousands of lines logged in a day (if not more) so many indicators go unnoticed. This is why it can take weeks or months to detect a breach.
Suppose that the attacker(s) targeted an employee and exploited their system. One type of IOC would be records of that system accessing areas of the network that it doesn't have access to, or has never accessed before. The key here is abnormality. Look for the things that don't belong, or just seem out of place.
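As a rough illustration, this kind of abnormality check can be as simple as comparing each host's current activity against a historical baseline of where it has connected before. The log format, hostnames, and function below are hypothetical, a minimal sketch rather than a production detection:

```python
# Hypothetical sketch: flag first-time access events by comparing a
# host's current destinations against a historical baseline.
# The (host, destination) event format is an assumption for illustration.

def find_abnormal_access(baseline, current_events):
    """Return (host, destination) pairs never seen in the baseline."""
    anomalies = []
    for host, destination in current_events:
        # Anything a host has never touched before is worth a second look.
        if destination not in baseline.get(host, set()):
            anomalies.append((host, destination))
    return anomalies

baseline = {"laptop-042": {"fileserver-1", "mail"}}
events = [("laptop-042", "mail"), ("laptop-042", "hr-db")]
print(find_abnormal_access(baseline, events))  # [('laptop-042', 'hr-db')]
```

A real deployment would build the baseline from weeks of logs and account for legitimate change, but the principle is the same: alert on what doesn't belong.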
Another example is looking for random, unexpected DNS requests. Attackers tend to call home to access additional tools, or their payloads will make external requests for instructions. A DNS request matching a list of known malicious servers, or IP addresses with bad reputations, is a positive IOC the moment the connection is made. This works because the exploitation phase is one of the few times an attacker has to make noise; in most cases the process produces a noticeable spike in DNS traffic.
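One simple way to operationalize that matching, sketched below with made-up domain names, is to check each queried domain, and every parent domain, against a blocklist of known-bad domains:

```python
# Hypothetical sketch: check outbound DNS queries against a blocklist
# of known-bad domains. Domain names here are illustrative only.

BLOCKLIST = {"evil-c2.example", "dropper.example"}

def flag_dns_queries(queries, blocklist=BLOCKLIST):
    """Return queries whose domain (or any parent domain) is blocklisted."""
    hits = []
    for domain in queries:
        parts = domain.lower().rstrip(".").split(".")
        # Expand a.b.c into {a.b.c, b.c, c} so subdomains of a bad
        # domain are caught as well.
        candidates = {".".join(parts[i:]) for i in range(len(parts))}
        if candidates & blocklist:
            hits.append(domain)
    return hits

print(flag_dns_queries(["cdn.vendor.example", "beacon.evil-c2.example"]))
# ['beacon.evil-c2.example']
```

In practice the blocklist would come from a threat-intelligence feed and the queries from DNS resolver logs, but even this naive set intersection catches beaconing to known infrastructure.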
What about watering hole attacks? What counts as a good IOC on webservers? This too will require sifting through massive amounts of logs, but if the webserver logs are flooded with 500 errors, permission errors, or path errors, you have a problem. SQL Injection (SQLi) and Cross-Site Scripting (XSS) attacks stand out in these types of logs, as do File Inclusion attempts. To be fair, 500 errors can also be benign. But when they appear alongside database errors, or come only from a single application or resource, that's the difference between broken code and targeted code.
Likewise, watch for 404 errors, and look at how they are being triggered. In many cases Web vulnerability scanners (probing for XSS or SQLi), or bots probing an application, trigger these events. Finally, if you discover web shells (e.g., r57 or c99), usually because you've noticed a flood of random GET or POST requests in the logs unlike anything that's been seen before, that's a blatant IOC for a webserver. In fact, a shell on the webserver is the worst discovery you can make short of proven data exfiltration, because a shell means the attacker has control over everything.
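To make that log review concrete, here is a minimal, hypothetical sketch that flags access-log lines containing a few common attack signatures (SQLi and XSS probes, path traversal, well-known web shell names) or HTTP 500 responses. The patterns and the Apache-style log format are simplifying assumptions, not a complete detection ruleset:

```python
import re

# Hypothetical sketch: scan webserver access-log lines for a handful of
# common attack signatures. Patterns are illustrative, not exhaustive.
SUSPICIOUS = [
    re.compile(r"union\s+select", re.I),   # SQL injection probe
    re.compile(r"<script", re.I),          # reflected XSS probe
    re.compile(r"\.\./"),                  # path traversal / file inclusion
    re.compile(r"/(r57|c99)\.php", re.I),  # well-known PHP web shells
]

def flag_log_lines(lines):
    """Return lines matching any suspicious pattern, or returning HTTP 500."""
    flagged = []
    for line in lines:
        # In combined log format the status code follows the quoted request,
        # so '" 500 ' is a crude but workable match for server errors.
        if any(p.search(line) for p in SUSPICIOUS) or '" 500 ' in line:
            flagged.append(line)
    return flagged

sample = [
    '1.2.3.4 - - [10/Oct/2013] "GET /index.php HTTP/1.1" 200 512',
    '5.6.7.8 - - [10/Oct/2013] "GET /c99.php?act=ls HTTP/1.1" 200 9000',
]
print(flag_log_lines(sample))  # only the c99.php line is flagged
```

A real pipeline would also correlate by source IP and time window, since a burst of flagged lines from one client is far more telling than a single hit.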
When it comes to mitigations, many of the previously mentioned layers of protection still apply. In fact, some of those layers are ready-made for the exploitation phase. For example, Data Execution Prevention (DEP) can go a long way toward preventing malicious software from running on a compromised host.
While the attacker may be able to deliver the malware, if the victim attempts to execute it, DEP will stop it. However, there are plenty of malware variants and software exploits designed specifically to bypass DEP, so you cannot rely on this protection alone.
Whitelisting is another good mitigation, but it isn't perfect; it is entirely possible to hijack a legitimate (whitelisted) application and make it do something it shouldn't. So again, like DEP, whitelisting shouldn't be the only line of defense against exploitation.
"Another thing to remember about whitelisting is that we're talking about an attack that leverages a vulnerability in most cases, causing whitelisted files to act in unexpected ways, however they will still be whitelisted and thus able to execute. If the exploit then writes malicious routines to memory only, before hooking other legit processes, there will be nothing for whitelisting to see," commented Rik Ferguson, the VP Security Research at Trend Micro, during an interview with CSO for this series.
Anti-virus controls, such as reputation checks against IP addresses and software, are good backup layers of protection, as are the behavioral detections that most AV suites offer. However, at the end of the day, AV is not a perfect solution, and if the exploitation phase used something unknown, AV may be rendered completely useless. The same can be said for host-based IDS. Still, going without these options is worse.
Finally, maintaining software updates and patches, for both the OS and third-party programs, will go a long way toward preventing exploitation, as will controls that regulate privilege. The principle of least privilege is often ignored within IT, but it's a useful tool nevertheless.
The point to all of these mitigations is layers. Alone, none of them can fully stop the exploitation and installation phases, but when layered together, the odds of preventing serious trouble increase.
Part four of this series will examine the next logical step for an attacker after exploitation: Command and Control.