The Wassenaar Arrangement, a multilateral export control regime, defines "intrusion software" as software specially designed or modified to avoid detection by monitoring tools, or to defeat protective countermeasures, of a computer or network-capable device. Intrusion software is used to: extract data or information, or to modify system or user data; or to modify the standard execution path of a program or process in order to allow the execution of externally provided instructions.
Wassenaar states that monitoring tools are software or hardware devices that monitor system behaviours or processes running on a device. This includes antivirus (AV) products, endpoint security products, Personal Security Products (PSP), Intrusion Detection Systems (IDS), Intrusion Prevention Systems (IPS), and firewalls.
So… what we at F-Secure (and the rest of the antivirus industry) call "malware" appears to easily fit Wassenaar's definition of intrusion software.
Why is this interesting?
Well, the US Bureau of Industry and Security (BIS), part of the US Department of Commerce, has proposed updating its rules to require a license for the export of intrusion software.
And according to the Dept of Commerce, "an export" is *any* item that is sent from the United States to a foreign destination. "Items" include, among other things, software and technology.
So… if malware is intrusion software, and any item is an export, how exactly are US-based customers supposed to submit a malware sample to their European antivirus vendor? Seriously, customers send us malware that uses zero-day exploits all the time. Not to mention the samples that we routinely exchange with other trusted AV vendors from around the globe.
The text associated with the BIS proposal says the scope includes penetration testing products that use intrusion software in what looks like an attempt to limit "hacking" tools, but there is nothing about what is excluded from the scope. So the BIS might not intend to limit customers from uploading malware samples to their AV vendor, but that could be the effect if this new rule is adopted and arbitrarily enforced. Or else it could just force people to operate in a legal limbo. Is that what we want?
In the past few days, we received some cases from our customers in Italy and Spain, regarding malicious spam e-mails that pointed to Cryptowall or Cryptolocker ransomware.
The spam e-mails pretended to come from a courier/postal service, concerning a parcel that was waiting to be collected. The e-mails offered a link to track that parcel online:
When we did the initial investigation of the e-mails from our standard test system, the link redirected to Google:
So, no malicious behavior? Well, we noted that the first two URLs pointed to PHP scripts. Since PHP code is executed on the server side, not locally on the client, it is possible that the servers were 'deciding' whether to redirect the user to Google or to serve malicious content, based on some preset conditions.
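We never recovered the actual PHP source, but the server-side gating we suspected could be as simple as the following sketch (written in Python for illustration; the country set, URLs, and function names are all hypothetical):

```python
# Hypothetical reconstruction of the server-side gating logic.
# The real code was PHP and was never recovered; all names here
# (TARGET_COUNTRIES, PAYLOAD_URL, DECOY_URL) are illustrative.

TARGET_COUNTRIES = {"IT", "ES"}  # countries targeted by the campaign

# Placeholder URLs standing in for the real destinations.
PAYLOAD_URL = "https://cloud-storage.example/parcel_info.zip"
DECOY_URL = "https://www.google.com/"

def choose_redirect(client_country: str) -> str:
    """Serve the malicious file only to visitors who appear to be in a
    targeted country; everyone else gets bounced to Google."""
    if client_country.upper() in TARGET_COUNTRIES:
        return PAYLOAD_URL
    return DECOY_URL
```

A visitor geolocated to Italy (`choose_redirect("IT")`) would receive the payload URL, while an analyst connecting from anywhere else would only ever see the harmless redirect.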
Since this particular spam e-mail is written in Italian, perhaps only a customer based in Italy would be able to see the malicious payload? Fortunately, we have Freedome, so we can travel to Italy for a little while to experiment.
So we turned on Freedome, set the location to Milan and clicked the link in the e-mail again:
Now we see the bad stuff. If the user is (or appears to be) located in Italy, the server will redirect them to a malicious file hosted on a cloud storage server.
The e-mail spam sent to Spanish users is similar, though in those cases, a CAPTCHA challenge is included to make the site seem more authentic. If the link in the e-mail is clicked by a user located outside Spain, again we end up in Google:
If the site is visited instead from a Spanish IP, we get to the CAPTCHA screen:
And then to the malware itself:
This spam campaign doesn't use any exploits (so far), just old-fashioned social engineering; infection only occurs if the user manually downloads and executes the files offered on the malicious URLs. For our customers, the URLs are blocked and the files are detected.
Malware authors certainly do not take a breather when it comes to inventing new tricks for detecting sandboxes, the automated systems used nowadays to analyze millions of samples. Recently, Seculert unveiled an unprecedented sandbox detection method employed by the Dyre/Dyreza malware. We have seen similar anti-sandbox tricks used by the notorious Tinba banking trojan and would like to discuss our findings, which could be helpful in improving sandbox technology. Joe Security has also observed the same evasion technique in one of their samples.
User interaction combo detection
In the latest Tinba sample, we found that it adopts an evasion technique that checks for mouse movement using the GetCursorPos API. Additionally, its author has introduced a new way to detect sandboxes using the GetForegroundWindow API, which lets the malware identify the window the user is currently working in.
An automated sandbox system typically stays in the same window, often the desktop from which the malware was executed. The malware takes advantage of this by comparing the values returned by two consecutive calls to the GetForegroundWindow API, with an interval of a couple of seconds between the calls to allow time for real user interaction. If the sample is running in a sandbox environment, both calls return the same value, indicating that the active window has not changed since the sample was executed. In that case, the code keeps looping and only executes the main routine once the active window has changed and the mouse cursor has moved.
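The check described above can be sketched as follows. This is not Tinba's code (which calls the Windows GetForegroundWindow and GetCursorPos APIs directly); it is a Python simulation of the same logic, where the two callables stand in for those API calls:

```python
import time

# Simulation of Tinba's user-interaction check. The callables
# get_foreground_window and get_cursor_pos stand in for the Windows
# GetForegroundWindow and GetCursorPos APIs.

def looks_like_sandbox(get_foreground_window, get_cursor_pos, delay=2.0):
    """Return True if no user interaction is observed between two samples
    taken `delay` seconds apart."""
    window_before = get_foreground_window()
    cursor_before = get_cursor_pos()
    time.sleep(delay)  # wait a couple of seconds, as the malware does
    window_after = get_foreground_window()
    cursor_after = get_cursor_pos()
    # Identical values on both calls suggest an unattended (sandbox) machine;
    # Tinba keeps looping on this check until both values change.
    return window_before == window_after and cursor_before == cursor_after
```

On a real desktop the user eventually switches windows and moves the mouse, so the check fails and the malware proceeds; in a sandbox that never simulates interaction, the loop never terminates.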
I hope I'm a real machine
Before executing its main routine after the user-interaction check, Tinba employs another trivial evasion technique that is somewhat similar to Dyre/Dyreza's check on the number of CPU cores. In Tinba's case, it looks at the number of cylinders available on the running machine, which is essentially a check on disk capacity. Perhaps for ease of implementation, it only checks the number of cylinders on the disk using the ioctl code IOCTL_DISK_GET_DRIVE_GEOMETRY_EX instead of finding out the physical size of the disk. In this particular case, it determines whether the disk has at least 5000 (0x1388) cylinders, which is around 41 GB; otherwise the sample quits. Just like counting CPU cores, checking the disk capacity can be an effective evasion technique, because a regular machine nowadays has more cores and more disk space than a typical sandbox VM.
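The 41 GB figure follows from the usual logical disk geometry. Assuming the common translation of 255 heads and 63 sectors per track with 512-byte sectors (an assumption on our part; the actual geometry reported by IOCTL_DISK_GET_DRIVE_GEOMETRY_EX can vary), 5000 cylinders works out to:

```python
# Back-of-the-envelope check of Tinba's 5000-cylinder threshold, assuming
# the common logical geometry of 255 heads x 63 sectors/track x 512 bytes.
CYLINDERS = 0x1388        # 5000, the threshold Tinba tests for
HEADS = 255
SECTORS_PER_TRACK = 63
BYTES_PER_SECTOR = 512

capacity_bytes = CYLINDERS * HEADS * SECTORS_PER_TRACK * BYTES_PER_SECTOR
capacity_gb = capacity_bytes / 1000**3  # decimal gigabytes

print(round(capacity_gb, 1))  # prints 41.1
```

So a disk reporting fewer than 5000 cylinders under this geometry is smaller than roughly 41 GB, which on a modern machine is a plausible sandbox indicator.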
Tinba demonstrates that it can detect a sandbox simply by testing for user interaction with a window and by checking the machine's disk capacity. When hardening their sandbox technology, sandbox providers should keep in mind that malware authors are relentless in pursuing new ways to evade detection, and should make adjustments accordingly to keep up with them.
Tinba sample used in the analysis - 5c42e3234b8baaf228d3ada5e4ab7e2a5db36b03
Note (18 May 2015): Corrected text to change the calculated figure for the number of cylinders from 11GB to 41GB.