Dr David Chismon, Security Consultant
In recent years, it seems that every week a major company or organisation is revealed to have been compromised. Furthermore, publicised breaches of the past were typically compromises of a single application and its data, whilst breaches are now often much larger in scope, occurring over months or years and resulting in the vast majority of an organisation's valuable data being stolen. This contrasts with ever-increasing spending on information security by organisations. Clearly, the traditional focus on prevention and classic detection isn't working.
Currently, detection is all too often based on signatures alone. Be it anti-virus for hosts or IDS rules for network data, the majority of detection is geared towards low-effort, automated detection of known threats. The problem is that these signatures are often easily evaded. In some cases, changing as little as a single bit can mean the malware is no longer recognised by its signature, and attackers whose campaigns have never been uncovered have little to fear from signature-based detection. A competition called "Race to Zero" at the hacker conference Defcon in 2008 had competitors take known samples of ten different malware families and attempt to modify the samples so that popular anti-virus products could no longer detect them. Four different teams were successful with all ten samples, with one team achieving it in just over two hours.
In recent years, "Threat Intelligence" has promised to close this gap and allow detection of attackers by providing feeds of information on attacker malware and communications. However, in reality the majority of "Threat Intelligence" is simply more signatures by another name and at a higher price. Essentially, signature-based detection was criticised so heavily that the marketers had to rebrand, and the industry is now excitedly discussing the same old solutions under this different name. The 2015 Verizon Data Breach Investigations Report noted that threat intelligence was of debatable value unless an organisation could consume every feed available, and that despite this, 70-90% of malware samples are unique to an organisation.
Signature-based detection definitely has a place – it is typically low effort, produces few false positives and is relatively cheap. However, it is not sufficient to catch more advanced attackers. Detection is best achieved with a focus on three areas: endpoint analysis, log analysis and network traffic analysis. The unified analysis and correlation of all three provides the best chance of detecting attackers early.
A particularly powerful technique is anomaly analysis. For example, in an organisation the majority of endpoints will have similar programs starting at boot time. By looking across the organisation to find the one or two computers that are starting something in addition to what all the others are starting, organisations might be able to spot malware for which no signatures exist.
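The cross-estate comparison described above can be sketched in a few lines. This is a minimal illustration, not a product: the hostnames, program names and the threshold of "fewer than two hosts" are all invented for the example, and real telemetry would come from an EDR agent or autoruns collection rather than a hard-coded dictionary.

```python
from collections import Counter

def rare_autostarts(host_entries, threshold=2):
    """Flag autostart entries seen on fewer than `threshold` hosts.

    host_entries maps hostname -> set of autostart program names.
    """
    counts = Counter()
    for programs in host_entries.values():
        counts.update(programs)
    flagged = {}
    for host, programs in host_entries.items():
        # Anything only this host (or very few hosts) starts is anomalous.
        rare = {p for p in programs if counts[p] < threshold}
        if rare:
            flagged[host] = rare
    return flagged

# Example: three hosts share the same startup items; one runs an extra binary.
fleet = {
    "pc-01": {"explorer.exe", "av.exe"},
    "pc-02": {"explorer.exe", "av.exe"},
    "pc-03": {"explorer.exe", "av.exe", "updater32.exe"},
}
print(rare_autostarts(fleet))  # → {'pc-03': {'updater32.exe'}}
```

The same frequency-counting idea generalises to services, scheduled tasks, browser extensions or loaded drivers: anything most of the fleet has in common forms a baseline, and the outliers are where to look.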
Network traffic provides similar opportunities. For example, analysis of an organisation's user agent strings typically reveals clusters of large numbers of hosts using the same user agent string, with connections that fall outside those clusters often worth investigating. A piece of malware pretending to be Internet Explorer 8 may stick out like a sore thumb when all the legitimate web communications are coming from IE10.
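A rough sketch of that clustering, assuming proxy logs have already been reduced to (host, user-agent) pairs; the minimum cluster size of five is an arbitrary illustrative choice, and the user-agent strings are simplified stand-ins for real ones:

```python
from collections import Counter

def outlier_user_agents(requests, min_cluster=5):
    """Return (host, user_agent) pairs whose UA string falls outside the
    large clusters formed by the rest of the estate's web traffic."""
    ua_counts = Counter(ua for _, ua in requests)
    return sorted({(host, ua) for host, ua in requests
                   if ua_counts[ua] < min_cluster})

# Twenty hosts browse with the corporate IE10 build; one connection claims IE8.
traffic = [("pc-%02d" % i, "Mozilla/5.0 (MSIE 10.0)") for i in range(20)]
traffic.append(("pc-07", "Mozilla/4.0 (MSIE 8.0)"))
print(outlier_user_agents(traffic))
# → [('pc-07', 'Mozilla/4.0 (MSIE 8.0)')]
```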
Volume-based analysis can be particularly powerful as well. For example, large unexpected transfers of data between hosts may indicate the aggregation of files prior to exfiltration, just as a large number of failed logins to a server may indicate a brute-force attempt.
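Both volume checks can be illustrated with simple statistics. The numbers below are fabricated for the example, and a z-score over a small fleet is a crude baseline – real deployments would model per-host history rather than a single daily snapshot – but it shows the shape of the idea:

```python
import statistics

def volume_outliers(daily_bytes, z=2.5):
    """Flag hosts whose transfer volume sits more than `z` population
    standard deviations above the fleet mean."""
    values = list(daily_bytes.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # every host moved the same amount; nothing stands out
    return [host for host, v in daily_bytes.items() if (v - mean) / stdev > z]

def bruteforce_suspects(failed_logins, limit=50):
    """Flag sources with an implausible number of failed logins."""
    return [src for src, n in failed_logins.items() if n > limit]

# Nine hosts move ~1 GB a day; one suddenly moves 50 GB.
fleet = {f"pc-{i:02d}": 1_000_000_000 for i in range(9)}
fleet["pc-99"] = 50_000_000_000
print(volume_outliers(fleet))                                # → ['pc-99']
print(bruteforce_suspects({"10.0.1.5": 3, "10.0.9.9": 400}))  # → ['10.0.9.9']
```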
One significant problem with detecting attackers is the "needle in the haystack" problem. Out of the gigabytes of data flowing around your network, how do you know which connections are the attacker stealing data? From the millions of lines of log entries, how do you find the few that are an attacker logging in?
Honeypots present one opportunity to help achieve this. Instead of trying to find a needle in a haystack, you try to find a needle in an empty room. Instead of looking at the logs from a busy file server that a significant proportion of your employees are using, you look at a server that no one is using. Suddenly you're in a position where any connection is worthy of investigation.
Honeypots are often implemented as a host (or something pretending to be a host) that is not being used by employees. When an attacker attempts to port scan the host or log in, the honeypot alerts the security team. Honeypots can often be used for research as well: the Kippo SSH honeypot mimics an SSH server but lets an attacker in if they appear to be brute forcing, then presents a fake shell so that researchers and defenders can see what the attacker was trying to do. Many honeypot services and self-implemented honeypot servers follow a similar idea, with the host either emulating normal behaviour or heavily instrumented to gather as much information as possible about the attacker, their tools and intent. For instance, sysdig on Linux allows instrumentation of much of what happens on the host, making it easy to understand what the attacker was doing.
However, just having honeypot hosts misses other opportunities. Defenders should take everywhere they are trying to detect attackers and work out whether a honeypot could be designed around it. For example, so-called "honeycreds" are unused privileged (or normal) accounts that are set up so that any attempt to log in to them alerts the security team. This way, if an attacker has identified the domain admins and tries a default password, it becomes immediately obvious.
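Detecting use of a honeycred can be as simple as scanning authentication logs for the decoy account names. The sketch below uses invented account names and a plain substring match over simplified log lines; real log formats (Windows 4624/4625 events, sshd auth lines) would warrant proper parsing:

```python
# Decoy accounts created purely as bait -- names are invented for this example.
DECOY_ACCOUNTS = {"svc_backup_adm", "da-helpdesk2"}

def honeycred_hits(log_lines, decoys=DECOY_ACCOUNTS):
    """Return any log line mentioning a decoy account. Since nobody
    legitimately uses these accounts, every hit warrants investigation."""
    return [line for line in log_lines
            if any(acct in line for acct in decoys)]

logs = [
    "4624 logon user=jsmith src=10.0.1.5",
    "4625 failed logon user=da-helpdesk2 src=10.0.9.77",
]
print(honeycred_hits(logs))
# → ['4625 failed logon user=da-helpdesk2 src=10.0.9.77']
```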
Individual files can also be ‘honeypotted’. By studying the types of files attackers try to collect, and by evaluating the files your organisation considers to be its most valuable assets, fake versions can be created. Monitoring access of those files can reveal an attacker snooping around, or an overly nosey employee.
Computer Network Attack (CNA, as opposed to espionage) can be even harder to detect before the event than attacks focused on data theft. Rather than probing around the network, stashing data and exfiltrating – all of which might be caught by anomaly detection – CNA typically involves an initial compromise followed by positioning for the attack, with the malware going silent until it is launched. In some cases attackers may wait a significant time, with malware checking in only occasionally before launching the destructive attack. During this period there may be no active behaviour to detect.
Honeypots offer some hope here: if an attacker can be tempted to interact with a honeypot during attack positioning, it might reveal their presence. As with the use of honeypots to detect data theft, this requires working out what an attacker may target to destroy or severely impair and then creating a fake version. For example, SCADA or production machinery honeypots might reveal an attacker positioning themselves to destroy a production environment.
Honeypots are unfortunately not a silver bullet that "solves cyber". There are a number of issues that have to be considered.
Firstly, honeypots are only useful if an attacker interacts with them. Whilst pentesters might scan an entire network with a port scanner and then try to find default passwords, very few real attackers are seen to do this. A honeypot that just sits on an unused IP may alert you during your annual pentest but sit there silently whilst Dancing Flamingo (or whichever attack group is feeling keen at the moment) pilfers all the data. Likewise, an attacker may not bother trying to get into a fake domain admin account if they have found shared local admin passwords across the estate. Maximising the usefulness of a honeypot therefore requires a lot of thought as to how an attacker is likely to act on your network and how they might "stumble" across the honeypot.
This leads onto the second problem, that in any organisation, a defensive team's time and resources are often limited. If organisations aren't careful, honeypots can end up as another distraction for an already stretched team. Time spent planning, deploying, maintaining and monitoring honeypots, as well as investigating the alerts generated by the occasional nosey employee or pentester can prevent teams spending time on other important activities. Honeypots therefore need to be considered as part of a wider defensive strategy so that organisations are sure that the time and effort spent on honeypots is not overshadowing other priorities.
Honeypots are typically designed to be compromised, so organisations need to ensure that this doesn't accidentally give attackers a position they didn't have previously. This risk can be minimised by honeypots that mimic systems rather than being actual systems themselves. There is an argument sometimes made that if you set up a domain admin "honeyaccount", setting an intentionally weak password means that should an attacker try to brute force such accounts, they will be successful with the "honeyaccount" before they are with a real one. The alert generated by the successful login can then be investigated as a high priority. However, this is very dangerous, as the attacker now has a domain admin account where they did not before, and if they move quickly they may stay ahead of the defensive team for some time.
Whilst real attackers typically don't port scan internally, some do search the domain for hosts that have names or descriptions that sound interesting and connect to them. Therefore, organisations may want to list the honeypot in the domain or even have it domain joined to increase the chance an attacker will interact with it. However, unless done carefully, this risks giving the attacker domain credentials or some other unforeseen advantage.
The final significant area of concern with honeypots is that if an attacker realises they are in a honeypot, they may respond. Although few reports of such behaviour exist, an attacker may switch from data theft tactics to destructive attacks if they think they have been detected. Alternatively, they may change their tactics and tools to make themselves even harder to detect. Honeypots are therefore most effective when they alert the defenders that they have caught something but do not tip off the attackers in the process.
One concept that is being seen more is "honeypotting as a service", where organisations can buy honeypotting services that reduce the time and effort required to set up and monitor a honeypot, as well as offering more user-friendly investigation of attacker activities.
Another concept emerging is that of adaptive honeypots and "active defence". The idea of active defence is one where defence doesn't simply involve alerting defensive teams but actively responding to attackers once detected. The nature of the response can vary. It may be enhanced monitoring, where detection causes automatic flagging of all other behaviour from that account or IP address. Other ideas that have been discussed are segregation or isolation of attackers. Networks can be reconfigured on the fly to place the attacker in a zone from where they are limited in the damage they can do. This might simply be network isolation or potentially some elements of a fake network can be created to both distract the attacker and to study what they do and what they may be after.
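The "enhanced monitoring" flavour of active defence can be sketched very simply: once a source trips a honeypot, everything else it does is escalated rather than triaged normally. The class below is an invented illustration of that flow; the isolation and fake-network responses described above would plug in where the comment indicates.

```python
class ActiveDefence:
    """Sketch of honeypot-driven enhanced monitoring: sources that have
    touched a honeypot go on a watchlist, and all of their subsequent
    activity is escalated instead of handled as routine."""

    def __init__(self):
        self.watchlist = set()

    def honeypot_alert(self, src_ip):
        self.watchlist.add(src_ip)
        # A fuller implementation might also push a firewall rule here,
        # or move src_ip into a quarantined or decoy network segment.

    def triage(self, event):
        return "escalate" if event["src"] in self.watchlist else "normal"

ad = ActiveDefence()
ad.honeypot_alert("10.0.9.77")
print(ad.triage({"src": "10.0.9.77", "action": "file_read"}))  # → escalate
print(ad.triage({"src": "10.0.1.5", "action": "file_read"}))   # → normal
```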
Such "active defences" are sometimes coupled to the idea of honeypotting, as automated response to attacks can only safely be done where detection of attacks has low false positives. Organisations are highly unlikely to adopt a system where employees might be frozen out of business systems unnecessarily. Honeypots, if implemented correctly, can offer the low false-positive rates needed.
However, these future directions and versions need to be evaluated as rigorously as more traditional honeypot strategies, as they may suffer from the same drawbacks.
Preventing breaches, and particularly their resulting impact, is difficult if not impossible, and many current strategies are being revealed as desperately insufficient. Detection must be predicated on the idea that prevention will eventually fail, and so ideas such as honeypots, which may reveal an attacker regardless of their tools, are deeply attractive. A defender's goal should be to strive for the "Attacker's Dilemma" posited by Richard Bejtlich, whereby if defences are effective enough, an attacker need only make one mistake for their whole campaign to be detected and unravelled.
However, honeypots are not silver bullets and should only ever be a component of a mature and asset-focussed defensive strategy. As with all defences, organisations should rigorously evaluate the cost and effort against realistic benefits of honeypots. Crucially, organisations need to ensure that if used, honeypots are used in a way that offers the best chance of realising the desired benefits.