Nick Jones, Cloud Security Lead
7 mins read
The inherent security of cloud platforms has pushed attackers toward exploiting misconfigurations and legitimate functionality rather than using well-known, easily identifiable tools such as Cobalt Strike, BloodHound, or Mimikatz. While this demands more of attackers, it also demands more of blue teams. Automated alerts keyed to known-malicious tooling are less likely to give away a sophisticated attacker; instead, organizations need to sift through legitimate business-as-usual (BAU) activity to find indicators of compromise. This requires blue teams to hunt for anomalous behavior: activity that should not be happening in a particular place or from a particular user. They can do this, so long as they have access to usable data. Here we explore how some existing cloud services can help with that task.
Detecting attacks in on-premises networks was never an exact science, but four distinct areas tended to dominate the concerns of security professionals: asset management, log storage, security tooling logistics, and mixed/legacy systems. Migrations to the cloud have either nullified these problem areas or changed them significantly. Here they are below, as seen in both on-premises and cloud environments:
This shift tells us something: threat detection in the cloud has moved away from endpoint-based telemetry toward the telemetry of actions. Specifically, detection should be focused on actions taken in the control plane of the cloud. The control plane is the web of APIs that allow an administrator to create, modify, and destroy resources within a cloud environment.
Attackers will use these APIs to perform actions that move them closer to their goals. They may add new user accounts, modify the permissions of existing accounts, or grant access to resources from external locations. They may also use these native capabilities to perform actions on objectives by accessing sensitive data or creating additional resources to mine bitcoin.
All of this has consequences for blue teams, who must now understand the context of actions to uncover indicators of compromise. Broadly speaking, they will either have to learn what ‘normal’ looks like, or they will need threat models that predict the likely actions of an attacker. The latter approach often generates extensive false positives, but can still be the best option for an individual business.
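As an illustrative sketch of the threat-model-driven approach, a detection can be as simple as a watchlist of control plane calls an attacker is likely to make. The record shape below is modelled on AWS CloudTrail events, and the watchlist itself is a hypothetical example, not a recommended ruleset:

```python
import json

# Hypothetical watchlist of control plane API calls that commonly feature
# in privilege-escalation and persistence attack paths (names follow AWS
# CloudTrail's eventName convention).
SUSPICIOUS_EVENTS = {
    "CreateUser", "CreateAccessKey", "PutUserPolicy",
    "AttachUserPolicy", "UpdateAssumeRolePolicy",
}

def flag_event(raw_event: str) -> bool:
    """Return True when a CloudTrail-style event matches the watchlist."""
    event = json.loads(raw_event)
    return event.get("eventName") in SUSPICIOUS_EVENTS

# Example record, trimmed to the fields used here: a new access key being
# created for an existing user is a classic persistence step.
record = json.dumps({"eventSource": "iam.amazonaws.com",
                     "eventName": "CreateAccessKey",
                     "userIdentity": {"userName": "build-agent"}})
print(flag_event(record))  # → True
```

In a real deployment, whether a hit on such a watchlist is malicious still depends on context: the same `CreateAccessKey` call is routine from a provisioning pipeline and alarming from a dormant user account, which is exactly why this approach generates false positives.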
In recent years, cloud providers have been strongly incentivized to improve their detection offerings. There are now a number of services that either detect threats directly or provide telemetry for further analysis.
The most important of these services are the integrated security information and event management (SIEM) and attack detection systems, prominent examples of which are:
Amazon GuardDuty, which functions as an intrusion detection system (IDS) for AWS accounts and resources.
Azure Sentinel, which is a cloud-native SIEM capable of ingesting telemetry from Azure and other sources, and which ships with out-of-the-box detections focused on Azure.
These are by no means a silver bullet, but they provide a valuable detection capability, and often have access to data sources not directly exposed to end users. They are particularly useful for organizations without significant dedicated security resources.
For larger organizations, however, the real benefit will come from the cloud providers’ telemetry tools, which can be broadly split into four categories:
Control plane activity logs provide audit records for user-initiated activity at the control plane level within a cloud environment. These logs typically contain authentication data, details of the access or API keys used, and whether the user logged in with multi-factor authentication (MFA). Key indicators of credentials being misused will be found here, as will indicators of enumeration activity and attempts to escalate privileges or deploy new resources.
Network traffic logs can be used to identify malicious traffic and provide visibility into communications between systems within the cloud and to external systems.
Configuration change logs record changes made within an environment. Depending on how the associated services are configured, they can even auto-remediate specific high-impact attacker activity.
Service-specific logs, such as access logs for S3 buckets or logs showing which users used which keys to decrypt data. Ingesting too much data in this way can be very expensive, so the usefulness of service-specific logs needs to be considered carefully on a case-by-case basis.
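As an example of hunting in control plane activity logs, the sketch below flags console logins made without MFA. The field names follow AWS CloudTrail's ConsoleLogin events; other providers expose equivalent attributes under different names:

```python
import json

def login_without_mfa(raw_event: str) -> bool:
    """Flag console logins where MFA was not used (CloudTrail-style fields)."""
    event = json.loads(raw_event)
    if event.get("eventName") != "ConsoleLogin":
        return False  # only interactive console logins are of interest here
    mfa = event.get("additionalEventData", {}).get("MFAUsed", "No")
    return mfa != "Yes"

# Trimmed example record: a console login with no MFA should be flagged.
event = json.dumps({"eventName": "ConsoleLogin",
                    "additionalEventData": {"MFAUsed": "No"}})
print(login_without_mfa(event))  # → True
```

In practice a rule like this would run over events streamed from the audit log service into the SIEM, rather than over hand-built records.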
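Network traffic logs lend themselves to similar treatment. The sketch below parses a line in the default AWS VPC Flow Log record format (version 2) and flags rejected or unusually large flows; the byte threshold is an arbitrary, illustrative value that would need tuning per environment:

```python
# Default AWS VPC Flow Log field layout (version 2):
# version account-id interface-id srcaddr dstaddr srcport dstport
# protocol packets bytes start end action log-status
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport "
          "dstport protocol packets bytes start end action log_status").split()

def parse_flow(line: str) -> dict:
    """Turn a space-separated flow log line into a field-name -> value dict."""
    return dict(zip(FIELDS, line.split()))

def is_noteworthy(flow: dict, byte_threshold: int = 10_000_000) -> bool:
    # Rejected traffic and very large transfers are common hunting leads;
    # the threshold is a placeholder, not a recommended value.
    return flow["action"] == "REJECT" or int(flow["bytes"]) > byte_threshold

# Example line: a rejected inbound SSH attempt (port 22, protocol 6/TCP).
sample = ("2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 REJECT OK")
print(is_noteworthy(parse_flow(sample)))  # → True
```

A single flagged flow rarely proves compromise; the value comes from correlating these records with control plane activity in the SIEM.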
Here’s a table of some of the most useful services, organized by provider:
A good architectural approach will involve feeding the output of detection services into a centralized SIEM, where out-of-place actions can be identified. But how do you stop your telemetry from being compromised?
Separating all security services into their own accounts, subscriptions, or projects creates a hard permissions boundary, with workload accounts and resources granted write-only permissions to the log destinations. It is also possible to segregate security even further, into a separate cloud entity with a distinct single sign-on (SSO) platform. This is particularly useful where access to the cloud is provisioned via on-premises Active Directory (AD), and can prevent an attacker who pivots into the cloud from deleting or altering logs.
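As a sketch of the write-only pattern, the policy below (expressed as a Python dict for readability) allows a workload account to deliver objects into a central log bucket without being able to read, list, or delete them. The account ID and bucket name are placeholders, and a production policy would add further conditions:

```python
import json

# Hypothetical IAM-style bucket policy: the workload account may write log
# objects into the security account's bucket, but has no read, list, or
# delete permissions, so captured telemetry cannot be tampered with from
# a compromised workload account.
LOG_BUCKET_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "WorkloadAccountsWriteOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # placeholder
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::central-security-logs/*",      # placeholder
    }],
}

print(json.dumps(LOG_BUCKET_POLICY, indent=2))
```

The key design point is the single `s3:PutObject` action: logs flow in one direction only, across the hard boundary between workload and security accounts.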
The cloud has changed detection. Attackers now increasingly reach their objectives solely through the abuse of legitimate functionality. When attackers are not pivoting into the cloud from on-premises systems, initial access is often a result of this abuse, be it through the compromise of open storage buckets, the exploitation of misconfigurations, or the use of accidentally exposed API keys.
Our top tip for combating this threat is to make the most of services that feed telemetry into a centralized SIEM. Making this telemetry ingestion as smooth as possible for new cloud environments, accounts, and subscriptions will be critical to ensuring visibility across an organization's estate, especially as cloud adoption scales. The earlier an organization does this in its cloud journey, the easier it will be.
Even more detail on how the cloud has changed detection can be found here.