While no single response plan can address every potential attack, certain guidelines can help you handle a crisis while preserving important evidence for the subsequent investigation. Equally, certain hasty actions carry common pitfalls. Consider the following IT responses, and their impacts, when managing an incident or breach:
A typical response our investigators encounter is IT staff running as many antivirus (AV) products as possible under the illusion that this will help. AV products often change file systems, deleting malware and destroying the metadata crucial for an investigation. Moreover, they often retain few or no logs about what they found, where it was, and when it was put there. If you know there has been a major security incident, be aware that running one or more AV products is one of the most destructive actions you can take.
Patching systems is recommended; recent news items provide ample proof. However, when you find something unpatched on your internet-facing website or an internal server in the middle of an incident, remediating immediately is usually not the best approach. Bear in mind that when you make these changes, you also destroy the environment that was the scene of the crime, making it harder to answer questions about the data at risk, how much of your estate was exposed and what it was vulnerable to. Lacking that information will slow the incident response process, so preserve the scene before patching anything.
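One simple way to preserve part of the scene before remediation is to record the timestamps and hashes of the affected files. The sketch below is a minimal illustration of that idea, not a substitute for a proper forensic disk image; `snapshot_tree` is a hypothetical helper name.

```python
import hashlib
import json
import os

def snapshot_tree(root: str, manifest_path: str) -> dict:
    """Record the path, size, timestamps and SHA-256 of every file under
    `root`, so the pre-patch state can be reconstructed later."""
    manifest = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    digest.update(chunk)
            manifest[path] = {
                "size": st.st_size,
                "mtime": st.st_mtime,   # last modification time
                "atime": st.st_atime,   # last access time
                "sha256": digest.hexdigest(),
            }
    # Write the manifest somewhere outside the evidence tree in practice.
    with open(manifest_path, "w") as out:
        json.dump(manifest, out, indent=2)
    return manifest
```

Store the manifest off the affected host; it lets investigators answer "what did this box look like before we touched it?" even after the patch is applied.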
You can’t be hacked without power. Very true. However, most of the information you will need to answer questions about an attack resides in live system memory. Without it, incident responders will struggle to establish which processes were running, which network connections were open and what the attacker was actively doing.
If it’s not ransomware, attackers can do little with malware they can’t control, so think twice before pulling the plug in an effort to contain the attack.
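If power must be cut, capture volatile state first. A real first responder would use a vetted memory-imaging toolkit run from trusted media; the sketch below only illustrates the idea of saving volatile output to timestamped files, and the command list is a placeholder.

```python
import datetime
import os
import subprocess

# Placeholder triage commands; a real kit would use vetted, statically
# linked binaries run from trusted media, not the suspect system's own tools.
TRIAGE_COMMANDS = {
    "processes": ["ps", "aux"],
    "connections": ["netstat", "-an"],
}

def collect_volatile(commands: dict, out_dir: str) -> dict:
    """Run each triage command and save its output to a timestamped file,
    so volatile state survives a later power-off."""
    os.makedirs(out_dir, exist_ok=True)
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    saved = {}
    for label, argv in commands.items():
        try:
            output = subprocess.run(
                argv, capture_output=True, text=True, timeout=60
            ).stdout
        except OSError as exc:  # tool missing on this host
            output = f"collection failed: {exc}\n"
        path = os.path.join(out_dir, f"{stamp}-{label}.txt")
        with open(path, "w") as f:
            f.write(output)
        saved[label] = path
    return saved
```

Write the output to removable or remote storage, not the suspect disk, so the collection itself does not overwrite evidence.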
When IT teams do find malicious files and malware on their systems, a common approach is to move them to a desktop folder called “malware” and use the infected system to do some amateur malware analysis. This usually destroys important information about where the malware originated, including date and time stamps, all whilst the still-active attacker watches the IT team attempt to figure out what happened.
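The safer habit is to record everything about a suspicious file without moving it. A minimal sketch, assuming only that you can read the file; the function name is illustrative:

```python
import hashlib
import os

def record_in_place(path: str) -> dict:
    """Hash a suspicious file and capture its timestamps without moving it,
    so its original location and metadata survive for investigators."""
    st = os.stat(path)
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return {
        "path": os.path.abspath(path),
        "sha256": digest.hexdigest(),
        "size": st.st_size,
        "modified": st.st_mtime,  # stat() does not alter mtime,
        "accessed": st.st_atime,  # but opening the file may update atime
    }
```

Note the caveat in the comments: even reading the file can update its access time on some filesystems, which is why proper acquisitions are done from a forensic image rather than the live system.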
VirusTotal is fantastic. However, when you upload malware to this and other sandbox services, the files become available for anyone else on the platform to download, tipping off the world as to what malware was active on your systems. Moreover, attackers often monitor for their malware appearing on VirusTotal as an early warning sign that you are on to them. Instead, use the search function to look up a file hash, or switch to an offline or private malware analysis sandbox.
Blocking Command and Control (C2) channels is an important aspect of incident containment; however, it must be executed at the right time. Often the right time is not the moment you find one, as attackers can, and often do, have multiple C2 channels. Blocking them carries the risk of alerting the attackers, causing them to change their behaviour, create new channels in increasingly obscure ways or turn destructive. This makes it more challenging for incident responders to get ahead of the attack. As such, balance having sufficient intelligence on the attack against the data leakage risk before deciding to block C2 channels.
When you rebuild a system, you lose almost all data about the attack, and will likely re-introduce the security weaknesses that got it compromised in the first place. If you are in the middle of a major incident, any system due to be rebuilt is worth preserving first in case an investigation is needed. In normal day-to-day incident management, the same rule applies. Discovering that the source of an attack was a workstation you had already rebuilt is a hopeless scenario, as questions about how or what occurred will remain unanswered. Worse, you can end up playing ‘whack-a-mole’ with the attacker as they repeatedly re-establish a beachhead using the same attack vectors.
Operational IT and security teams need to work hand in glove when an incident is underway. Often, operations teams will help carry out instructions to move large amounts of data around for analysis. When space is limited, these teams will usually delete what they believe to be valueless data, such as logs, which are in fact exceptionally valuable to the incident investigation. Make sure your security and operations teams have first responder training and know what you are trying to achieve, so that valuable data is not lost.
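When space genuinely must be reclaimed, logs can be compressed and hashed rather than deleted. A minimal sketch, assuming a responder-chosen archive location; the function name is illustrative:

```python
import hashlib
import os
import tarfile

def archive_logs(log_paths: list, archive_path: str) -> str:
    """Compress logs into one tarball and return its SHA-256, so space
    can be freed without the evidence being lost or silently altered."""
    with tarfile.open(archive_path, "w:gz") as tar:
        for path in log_paths:
            tar.add(path, arcname=os.path.basename(path))
    digest = hashlib.sha256()
    with open(archive_path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Recording the archive’s hash at creation time lets investigators later demonstrate that the preserved logs were not tampered with.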
Attribution is a common request we receive from clients. A common hindrance to the incident response process is security teams believing an insider must be to blame. The belief that you couldn’t have been hacked from outside is a natural one, and it normally leads to looking internally. However, the administrators and managers of the compromised systems, the very people who know how those systems work and what might have happened, become the ones you now suspect. Being on the wrong side of these individuals can obstruct an efficient investigation. Whilst insider threats are real, the vast majority of attacks are perpetrated from outside the organisation, or outside the country, where local law enforcement cannot reach the perpetrators. Unauthorised privileged account use is expected attacker behaviour. Be observant, keep an open mind, but keep your internal teams on-side until you have the full picture.
By far, this is the most common mistake security and response teams make: assuming the attacker did not use their access to move laterally towards their goal. Unless you have caught an incident early, at the point of entry, assume the worst; it is safer in the long run. Look at what an attacker could have done and where they could have moved, then test those theories to prove the worst did not happen. Our experience shows that assuming the best-case scenario usually leads to worst-case results.