January 2018 (updated October 2020)
As attacker tactics, techniques, and procedures (TTPs) evolve, organizations are, wisely, seeking to improve their detection capabilities in step.
Our article about tackling detection challenges highlights the vital combination of trained human insight and suitable technology. As Gartner (2017) contends, “the shift to detection and response approaches spans people, process, and technology elements.” Technology alone is not the solution.
The global cyber security skills shortage hampers the growth and development of teams with the right skills. This perhaps leads to a reliance on tooling, bought on the assumption that it will detect attacks without fail. It will not. Reliable and effective detection requires users to deploy and tune their tools tactically, and it relies on an understanding of environmental context and relevant attacks.
Security monitoring teams’ struggle to develop a meaningful detection capability can also often be attributed to an overreliance on signature-based detection or alerts and playbooks that bear no relation to modern attack techniques.
An effective security monitoring team analyzes relevant, current data sources and continually tests processes to improve detection methods (such as creating new alerts) in line with current attacker techniques. As the detection capability evolves, the risk to the business is reduced.
Having seen many organizations face these challenges, we base our recommendations for an effective detection capability on the following factors:
Security monitoring teams must understand what they are responsible for protecting. At the highest level, this may be a threat assessment or business risk assessment, in which the business identifies its critical assets and judges the relative priority of foreseeable events. A lack of collaboration or of risk awareness can lead to poor implementation, or to assets being overlooked entirely, causing inaccuracies in the design, implementation, and management of security monitoring structures. Monitoring responsibilities should be allocated as early as possible.
To succeed, a security monitoring team should know the intricacies of their environment—the battlefield on which a cyber attack will play out. Knowing the architecture, design, strengths, and weaknesses of their IT infrastructure helps these defenders predict an attacker’s most likely paths of entry and onward route. This enables the implementation of preventative measures and monitoring at strategic locations.
Defensive teams that understand the attacker mindset are better equipped to anticipate malicious actions and can shape detection by pushing the enemy along a specific, "trip-wired" path. If successful, this can result in a strategic ambush. Such opportunities rely on well-tuned technology and rehearsed use cases.
Use cases constitute a set of rules that, when broken, notify the detection team of a security incident. They are used to monitor various log sources and range in complexity.
It is not good practice to rely solely on the use cases provided as standard by monitoring technologies, or on alerts driven by statistically anomalous behavior. Creating well-designed, bespoke use cases enables analysts to detect attacker actions reliably, with a high signal-to-noise ratio, and to drive their investigation and containment.
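As a minimal sketch of what a bespoke use case can look like, the following rule flags several failed logons followed by a success for the same account and host. The event fields, threshold, and account names are illustrative assumptions, not a prescribed schema:

```python
# Sketch of a bespoke use case: alert when a host records several failed
# logons followed by a success for the same account within the stream.
# Field names and the threshold are illustrative assumptions.
from collections import defaultdict

FAIL_THRESHOLD = 5  # failures before a subsequent success is suspicious

def brute_force_alerts(events: list) -> list:
    """Return (user, host) pairs matching the fail-then-succeed pattern.

    Events are assumed to be time-ordered.
    """
    failures = defaultdict(int)
    alerts = []
    for e in events:
        key = (e["user"], e["host"])
        if e["outcome"] == "failure":
            failures[key] += 1
        elif e["outcome"] == "success":
            if failures[key] >= FAIL_THRESHOLD:
                alerts.append(key)
            failures[key] = 0  # reset after a successful logon
    return alerts

# Simulated log stream: six failures then a success for one service account.
logs = (
    [{"user": "svc_backup", "host": "srv01", "outcome": "failure"}] * 6
    + [{"user": "svc_backup", "host": "srv01", "outcome": "success"}]
)
print(brute_force_alerts(logs))  # [('svc_backup', 'srv01')]
```

A real deployment would add a time window and tune the threshold to the environment; the point is that the rule encodes a specific attacker behavior rather than a generic vendor default.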
Effective use cases, which alert security teams to incidents in need of immediate investigation or intervention, are built from an understanding of the most probable, highest-risk threats. Identifying the most likely adversaries, their motivations, and their TTPs informs use case design and documentation, and gives analysts the knowledge to succeed in future investigation and containment. A good use case will, for example, contextualize an alert within the lifecycle of an attack; analysts may then anticipate an attacker's subsequent actions and uncover further avenues of investigation.
Applying context to use cases also clarifies the true meaning of an alert. For example, if an alert is triggered by an attempt to disable antivirus software, its severity depends on the host concerned: triggered on a domain controller, that alert is likely far more critical than on a developer's machine.
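This kind of contextual triage can be sketched as a simple enrichment step. The host inventory, role names, and severity weightings below are hypothetical examples, not a prescribed scheme:

```python
# Illustrative sketch: enrich a raw alert with host context before triage.
# Host inventory and severity weightings are hypothetical assumptions.

HOST_ROLES = {
    "dc01": "domain_controller",
    "dev-laptop-07": "developer_workstation",
}

# Relative criticality of the same alert depending on where it fires.
ROLE_WEIGHT = {
    "domain_controller": 3,
    "server": 2,
    "developer_workstation": 1,
}

def triage(alert: dict) -> dict:
    """Attach host context and a priority to a raw alert."""
    role = HOST_ROLES.get(alert["host"], "unknown")
    weight = ROLE_WEIGHT.get(role, 1)
    return {**alert, "role": role, "priority": alert["base_severity"] * weight}

raw = {"rule": "antivirus_disabled", "host": "dc01", "base_severity": 5}
print(triage(raw))  # priority 15 on the domain controller, vs 5 on dev-laptop-07
```

The same rule firing on two hosts thus yields two different priorities, which is what lets analysts focus on the incident that matters first.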
When deciding which detection technologies to deploy, where they are deployed is just as important. If controls are placed only at the perimeter of your estate, for example, the full scale of an attack may go unrecognized; a sophisticated attacker can often bypass the perimeter without triggering an alert. For some organizations, with the introduction of software-as-a-service (SaaS) platforms such as GitHub, Slack, or Office 365, a true perimeter no longer even exists. This shift alters the tactics and mindset required of analysts and defenders.
Deployment of detection and preventative controls throughout the whole estate realistically reflects the lifecycle of a modern attack. Lockheed Martin’s cyber kill chain acts as an effective guide when mapping controls to an attacker’s methodology. To detect modern attack techniques, organizations require a variety of log sources, including (but not limited to) endpoint detection (usually a software agent), network-level monitoring, and logs from pre-existing preventative controls, such as anti-virus.
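One lightweight way to apply the kill chain is as a coverage map: list the telemetry available at each stage and look for stages with none. The stage names follow Lockheed Martin's model; the log-source inventory below is an illustrative assumption:

```python
# Illustrative coverage map: which log sources could detect activity at
# each cyber kill chain stage. The mapping is an example, not a prescription.

KILL_CHAIN = [
    "reconnaissance", "weaponization", "delivery", "exploitation",
    "installation", "command_and_control", "actions_on_objectives",
]

# Hypothetical inventory of deployed telemetry per stage.
COVERAGE = {
    "delivery": ["email_gateway", "proxy_logs"],
    "exploitation": ["endpoint_agent", "antivirus"],
    "installation": ["endpoint_agent"],
    "command_and_control": ["network_monitoring", "proxy_logs"],
    "actions_on_objectives": ["endpoint_agent", "network_monitoring"],
}

def coverage_gaps() -> list:
    """Return kill chain stages with no telemetry mapped to them."""
    return [stage for stage in KILL_CHAIN if not COVERAGE.get(stage)]

print(coverage_gaps())  # ['reconnaissance', 'weaponization']
```

Some stages (weaponization in particular) occur off-network and are inherently hard to observe, so a gap is not always fixable, but making gaps explicit still guides where new log sources will pay off most.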
Organizations considering detection in the cloud can also consider new opportunities for telemetry in these environments. Whatever the nuances of your environment, technology’s effectiveness depends on the human insight used to configure it and the experience gathered over time from using it.
Regular testing that simulates modern attack techniques will substantiate an organization's defenses against the ever-changing threats it faces. Outcome-based testing, emulating the attacks most likely to be deployed against the business, helps develop effective detection and demonstrates return on investment. Once use cases have been proven to alert on the actions a real attacker would perform, testing can be automated and repeated regularly to ensure a consistent level of detection over time. This frees analysts, engineers, and testers to develop new and more complex use cases that detect attacker actions beyond existing automated alerts.
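Such automated, repeatable detection tests can be sketched as follows: replay simulated attacker events through a use case and assert that the expected alert fires, and that benign activity does not. The rule logic and event format are illustrative assumptions:

```python
# Sketch of automated detection testing: replay simulated attacker events
# through a use case and check the expected alerts fire. The rule and
# event schema are illustrative assumptions.

def av_disabled_rule(event: dict) -> bool:
    """Hypothetical use case: flag attempts to stop the antivirus service."""
    return (
        event.get("action") == "service_stop"
        and event.get("service") == "antivirus"
    )

# Simulated telemetry a real attacker action would generate.
ATTACK_EVENTS = [
    {"action": "service_stop", "service": "antivirus", "host": "dc01"},
]
# Routine administration that must not alert.
BENIGN_EVENTS = [
    {"action": "service_restart", "service": "print_spooler", "host": "dc01"},
]

def run_detection_tests() -> bool:
    """Re-runnable check that detection coverage has not regressed."""
    detected = all(av_disabled_rule(e) for e in ATTACK_EVENTS)
    no_false_positives = not any(av_disabled_rule(e) for e in BENIGN_EVENTS)
    return detected and no_false_positives

print("detection tests passed:", run_detection_tests())
```

Scheduled as part of a pipeline, a suite like this catches silent regressions (a broken log source, a mis-tuned rule) long before a real incident would.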