Jordan LaRose, Incident Response Consultant
16 mins read
I investigated an incident like this, shoulder-to-shoulder with the CISO of a global marketing firm. It’s one of the most difficult cases I’ve worked on and a testament to just how hard the decisions CISOs face can be. More than once, we thought we’d caught the attacker, only to find we’d been outsmarted again. They were intelligent, cunning, and vengeful. As you read through the events that unfold, ask yourself what choices you would make in a similar situation.
The story begins at 35,000 feet, with the CISO on his way from the organization’s US headquarters to a meeting in Singapore. The marketing firm was some 50,000 people strong, with offices on every continent. It battled threats regularly and had a strong security posture, albeit one that took work to maintain across so many regions. It was this challenging synchronization that the CISO hoped to improve. The success of his security policies had been firmly established at home. Now, he was on his way to present some of them in real time to his colleagues overseas and encourage their adoption.
It was a crucial meeting, designed to convince members of the Singapore leadership team of their region’s pivotal role within the larger, integrated security structure being put in place. The most anticipated part of the CISO’s presentation was the implementation of host-based firewalls, used to granularly protect individual hosts (rather than the perimeter) and control the spread of infections through the network. His plan: to open one of the laptops in the room, show the enabled firewall, then demonstrate its effectiveness.
The start of the presentation went smoothly and the CISO’s plans were well received. Upon opening a laptop for the big reveal, however, he realized the firewall was disabled. He collected himself and tried a second laptop, assuming that perhaps the roll-out hadn’t yet reached all hosts. Second time, unlucky again—the firewall was disabled. How could this happen? Everything was in place and confirmed before his flight. An awkward silence filled the room. Each attendee checked their laptop, only to find that not one had the firewall enabled.
The CISO left the meeting immediately and called Kevin, the sysadmin on his US team who had set up the firewalls. Kevin was adamant this was no fault of his: all the firewall settings had been established, and a group policy had been created to keep them active even if users rebooted their computers. While still on the line, Kevin checked the settings, and his stomach dropped: his policy had vanished and been replaced with another, set to turn off the firewall on any host as it connected to the domain. The change was even logged under his name. Neither knew quite what to say as the pair struggled against the urge to second-guess themselves and question the other’s intentions. The policy was reverted and the firewalls restored, and it was all chalked up to a simple mistake, but a storm was still brewing on the horizon.
Fast forward a few weeks, and Kevin sat down at his desk to log on to his laptop. His password was rejected. Assuming he’d simply let the password expire, he performed a reset and thought little of it, until he was logged off the system again a few hours later. He reset the password once more and attempted to isolate the cause. Before making any progress, he was logged out again. This happened twice more until Kevin became convinced that something was awry. Clearly, someone else was involved—whoever had removed his firewall group policy was messing with him again.
He took it upon himself to review the firewall and password reset activity. After some digging, he came across the IP address used to make the phony firewall group policy—an address he had no access to. On a call to the CISO, both agreed that something was amiss, even if the actual cause was beyond them. If it was an attacker, why hadn’t any alerts been triggered? With hesitation, our IR team was called in to unravel this knot.
It really wasn’t obvious to anyone—including me—whether we were looking at an attacker. Strange things do sometimes happen, especially when an organization’s digital infrastructure is vast and reliant on numerous third-party vendors.
After some investigation, we came across a program called “Adaxes”. Adaxes is used to manage Active Directory (AD) and turned out, in this case, to be where the unexplained password changes took place. Once again, though, Kevin had never used it, nor did he even have access. It was becoming clear (much to his relief) that any human error had nothing to do with him. Instead, we identified another sysadmin account, based in India and working for one of the client’s IT vendors, which appeared to have made the password update requests. It seemed we had uncovered the root cause. Stranger things have certainly come about through third-party activity, with network admins sometimes making updates and configuration changes without notifying their clients.
At this stage, no official incident had been raised by the client. A month had passed since the situation in Singapore, but we hadn't detected any malicious activity; there was no lateral movement or malware deployment. It really did look like the network was clean and the disruption had been caused by mistake.
The vendor organization was contacted. We waited and, after 2 long weeks, we heard back. The third-party Indian sysadmin no longer worked for the business, having left a few months back. How, then, was their account still being used? With this question front of mind, our investigation soon established that Adaxes can enable the impersonation of other accounts. The Indian sysadmin in question had in fact been impersonated by yet another sysadmin, working for the same vendor, based in the same city, and with a similar name. The only evidence suggesting that one had impersonated the other was the IP address. Tricky as it was, it finally felt like we’d found our lead.
The CISO and his security team were keen to lock the situation down now that a third-party was suspected, and a task force was assembled to address the user directly. It was 12 hours until the working day began in India.
We continued investigating in the background to make sure our due diligence was done. The password resets and any links between the Indian sysadmin’s account (the impersonator) and other activity were still unexplained. Then, 3 hours before the task force was due to arrive at its destination, we saw something on the client’s proxy server. A range of inexplicable events had been taking place there whenever the sysadmin was active. Multiple IP addresses were bouncing around, and the user connected to the password reset activity had been acting outside of India’s time zone. Another account had been accessing the proxy server at the same time as the resets, reducing the security controls on individual computers through group policy—except that user appeared to be based in Indonesia. It was strange enough for us to intervene and stop the task force just before it reprimanded the sysadmin in India. Surely something else was going on. We needed more information to make sense of the situation.
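The anomaly that tipped us off here, account activity falling outside the working hours of the account’s home region, is cheap to check for. A minimal sketch (the account names, time zones, and working-hour window below are illustrative assumptions, not details from the client’s environment):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Illustrative mapping of vendor accounts to their home time zones.
HOME_TZ = {
    "sysadmin_in": ZoneInfo("Asia/Kolkata"),
    "sysadmin_id": ZoneInfo("Asia/Jakarta"),
}

WORK_START, WORK_END = 8, 19  # assumed local working hours, 08:00-19:00


def outside_working_hours(account: str, event_utc: datetime) -> bool:
    """Return True if a UTC-timestamped event falls outside the
    account's local working hours - a cheap first-pass anomaly signal."""
    local = event_utc.astimezone(HOME_TZ[account])
    return not (WORK_START <= local.hour < WORK_END)


# A password reset logged at 21:30 UTC is 03:00 the next day in Kolkata.
event = datetime(2024, 3, 4, 21, 30, tzinfo=timezone.utc)
print(outside_working_hours("sysadmin_in", event))  # True
```

A signal like this proves nothing on its own, but stacked against the other oddities it was enough to pause the task force.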
We reconvened with the CISO to dig deeper into the proxy server. Proxy servers are interesting because they’re not a typical host on the network: access must be safelisted and any modifications are usually carried out physically. In IR, we rarely investigate them as a vector for compromise, because they’re so difficult to attack and yield little reward if you do. The only meaningful benefit of attacking one would be gaining control of IP addresses.
We looked specifically for any outside users that had connected to the proxy server during the times of the disruptive events so far. Something stood out: 2 logins, one tied to the internal sysadmin and one to the external sysadmin, had been made from the same IP address. That address was based in the UK, and its activity did indeed correlate with those timings. Somehow, a single IP address was becoming 2 other IP addresses, defying the functionality of the proxy server. It looked like we had an attacker playing games to hide their tracks.
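The pattern that stood out, one source address behind multiple distinct identities, can be surfaced by a simple grouping pass over the proxy access log. A hedged sketch with invented field values (the IPs and account names are placeholders, not from the case):

```python
from collections import defaultdict

# Illustrative proxy access log: (source_ip, account) pairs.
logins = [
    ("81.2.69.142", "sysadmin_internal"),  # UK-based address
    ("81.2.69.142", "sysadmin_vendor"),    # same address, different identity
    ("203.0.113.7", "sysadmin_vendor"),
]


def multi_identity_sources(events):
    """Map each source IP to the set of accounts seen behind it,
    keeping only IPs tied to more than one identity."""
    by_ip = defaultdict(set)
    for ip, account in events:
        by_ip[ip].add(account)
    return {ip: accts for ip, accts in by_ip.items() if len(accts) > 1}


flagged = multi_identity_sources(logins)
# Flags only the address seen behind both identities.
print(sorted(flagged))  # ['81.2.69.142']
```

In practice you would feed this thousands of log lines and time-bound the grouping, but the core question is the same: which source addresses are wearing more than one face?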
Since the proxy server did appear to have been compromised, we embarked on the intricate task of taking it apart and sifting through the data for more evidence. Eventually, we confirmed it: the server was being used to swap the attacker’s IP address whenever they performed an action. This meant kicking a legitimate computer off its IP address, taking that address, performing whatever action they had planned, then reassigning it to the original host. On the surface, it would look like a user performing this activity on their own machine as part of their normal workflow. Sometimes the attacker held an IP address for just 10 seconds before swapping it back, so we had to be precise and methodical when tracing through the activity. They had masked themselves in real time. If we were the bloodhounds in pursuit, the attacker was going to extreme lengths to throw us off their scent. I’d never seen this kind of obfuscation before, even from Russian state-backed threat actors.
Eventually, after the arduous process of attributing each action to one of the many IP addresses active on the proxy server, we came across our real attacker—and what a twist it was. He had been a member of the client’s security team, spending years building many of the network systems on the domain from the ground up. Naturally, this meant he had plenty of pathways into systems that only he knew about. When he re-entered the network after leaving the organization, he had backdoors into the proxy server, the Adaxes server, and many others. His motive was simple: disruption and chaos. No financial gain, no exfiltration of sensitive data; this was purely bad blood. He had waited pointedly, with knowledge of the CISO’s Singapore meeting, to disable the firewall and watch the situation unfold. Once we discovered him, it was all too easy to connect the dots. It was a case of 006 from GoldenEye or Raoul Silva in Skyfall—an insider leaving the organization with a bone to pick and the power to wreak havoc.
As I said, this was one of the hardest incidents of my career, with so little to go on and an attacker that had the advantage of understanding their victim’s security capability inside out. If I’m honest, and I had been in the CISO’s shoes, I really don’t know how quickly I would have escalated this situation either. Sometimes, you’re just not ready to pull the trigger, but there are other lessons to be learnt here.
The story goes to show how different attackers act based on their motivations. Two attackers with identical toolkits will show nuance in their behavior. They are unique every time, so the job of a response team is to understand the human behind the computer, with their own preferences and reasons for doing things.
What’s more, it demonstrates the delicate balance of trust that’s needed between CISOs and their frontline team. Whilst any employee can pose an insider threat—especially those with access to sensitive information and the skills to hide in the network—they’re also your first line of defense and can draw your attention to indicators of compromise.
Finally comes your IR plan. Like we’ve said previously, “No plan survives first contact with the enemy. This adage doesn’t mean planning is futile, but rather that no single plan can offer certainty when you encounter a malicious attacker.” This is to say that a good IR plan is one that acknowledges that it hasn’t accounted for every scenario and is designed instead to help you think on your feet and know when a discrepancy is worth pulling on.