NEWS FROM THE LAB - June 2012
 

 

Friday, June 29, 2012

 
Die Zeit Uses Six Months of Mobile Data to Profile Green Politician Posted by Sean @ 16:06 GMT

Have you ever wondered just how much your phone reveals about you?

That's what Green party politician Malte Spitz wanted to discover…

A 2008 German law required all telecommunications providers with more than 10,000 customers to retain six months' worth of data on all calls, messages and connections. Germany's Constitutional Court ruled the law unconstitutional in 2010.

Spitz acquired (meta)data from his telecom provider covering a period from August 2009 to February 2010. Zeit Online has made the raw data available via Google Docs. To demonstrate just how much of a personal profile can be crafted, Zeit Online augmented the data with publicly available information such as Spitz's tweets and blog entries.

And the result is an incredibly cool interactive map:

Vorratsdatenspeicherung
Source: http://www.zeit.de/datenschutz/malte-spitz-data-retention

You can read the details from Zeit Online's (somewhat inflammatorily titled) story: Betrayed by our own data.

 
 

 
 
Thursday, June 28, 2012

 
"Parental Control" Feedback Posted by Sean @ 10:10 GMT

From Reddit:

"My friends 7 year old sister left this note for her parents on their computer."

http://imgur.com/I5uDU

 
 

 
 
Tuesday, June 26, 2012

 
Summer Listening: Anonymity Posted by Sean @ 16:52 GMT

Recommendation: two recent Tech Weekly interviews covering the topic of online anonymity.

In the first part, host Aleks Krotoski speaks with 4chan founder Chris Poole. Near the end of the interview, Poole suggests a site such as 4chan offers "front end" anonymity whereas something like Tor offers "back end" anonymity. It's a prescient comment…

Krotoski's second interview just happens to be with the executive director of the Tor software project, Andrew Lewman.

Excellent analysis of anonymity and its importance. Well worth a listen.

The podcasts can either be streamed or downloaded.

Tech Weekly, Christopher Poole Tech Weekly, Andrew Lewman

Protip: Note the "2X" option in the screenshots above. Many English language podcasts are easily understandable at double playback speed. To take advantage of the option with iOS, be sure to set the audio type to "podcast" with any manually added files.

 
 

 
 
Thursday, June 21, 2012

 
Commoditization vs. Specialization Posted by Sean @ 14:07 GMT

Another rebuttal to Mikko's Flame-related opinion column in Wired magazine, this time from Bruce Schneier.

Schneier's summary of Mikko's argument:

"His conclusion is simply that the attackers — in this case military intelligence agencies — are simply better than commercial-grade anti-virus programs."

But Schneier doesn't buy it:

Schneier Security, June 2012

A couple of points.

First, regarding military malware's supposedly slow and stealthy spread. It's relative. Compared to something such as Conficker, most "non-military" malware is as quiet as a mouse. It's as stealthy as it needs to be.

Second, actually… Flame didn't really "spread". It was used in targeted attacks. Think sniper bullet, not germ warfare. (Stuxnet is a different story. But it wasn't supposed to spread in-the-wild.)

Third, if conventional malware writers want to evade detection they should adopt Flame's techniques? Look… most "conventional" malware writers don't actually use the malware they author. They sell it as a service. Buyers and users of malware kits have to pay for stealth. It isn't free. The real difference between crimeware and Flame/Stuxnet/DuQu is commoditization vs. specialization.

Let's use a real-world example.

Here's a screenshot from Securitas, a global provider of security services that employs more than 300,000 people.

About Securitas

And this is Iranian nuclear engineer Majid Shahriari's car soon after he was assassinated in November 2010 by unidentified assailants on motorcycles, who launched separate bomb attacks and detonated them from a distance.

Majid Shahriari

Look carefully.

Can you spot the difference between the services Securitas typically provides and the protection Shahriari would have needed?

 
 

 
 
Tuesday, June 19, 2012

 
Foreign Policy's Twitterati 100 Posted by Sean @ 11:59 GMT

Foreign Policy magazine has published its Twitterati 100 — a Twitter list of notables in the foreign-policy Twitterverse.

Presenting, Mikko H. Hypponen: Finnish cybersecurity expert… geek.

Finnish cybersecurity expert, geek

Hmm, why are the "Geeks" listed at the end? :-)

 
 

 
 
Thursday, June 14, 2012

 
It's Signed, therefore it's Clean, right? Posted by Sean @ 12:59 GMT

Here's a slide from a CARO 2010 presentation called: It's Signed, therefore it's Clean, right?

Ways Of Abusing Authenticode

Hmm, MD5 forgery? Yeah, that's been going around lately (#Flame).

Download the presentation here.

 
 

 
 
Wednesday, June 13, 2012

 
ZeroAccess's Way of Self-Deletion Posted by ThreatResearch @ 09:19 GMT

We normally see malware developing and evolving over the years. One particular malware we've been following is ZeroAccess, which has been continuously improving since we first detected it in late 2010. Case in point: in the latest samples, its self-deletion routine has changed.

This is a simple Windows batch file ZeroAccess used to use to remove itself after execution, as a fast and simple way to hide any traces of its presence from the user (click for larger view):

zeroaccess_selfdelete (11k image)
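
The screenshot shows ZeroAccess's own script. Purely as an illustration of the general trick (a hypothetical reconstruction, not ZeroAccess's actual code), a dropper implementing this kind of batch-file self-deletion looks roughly like this Python sketch:

    # Hypothetical reconstruction of the generic batch-file self-deletion trick
    # (an illustration of the technique, not ZeroAccess's actual script): the
    # running program writes a .bat that keeps deleting the target executable
    # until the file lock is released, then deletes the .bat itself.
    import os
    import subprocess
    import sys
    import tempfile

    def self_delete_via_batch(target_path):
        bat = os.path.join(tempfile.gettempdir(), "cleanup.bat")
        with open(bat, "w") as f:
            f.write(":retry\r\n")
            f.write(f'del "{target_path}"\r\n')                  # keep trying to delete the payload
            f.write(f'if exist "{target_path}" goto retry\r\n')  # loop until the delete succeeds
            f.write('del "%~f0"\r\n')                            # finally the script deletes itself
        subprocess.Popen(["cmd.exe", "/c", bat])                 # a real dropper would hide this window
        sys.exit(0)

    # self_delete_via_batch(sys.argv[0])   # on Windows, this would remove this very script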

Lots of other malware use this batch file self-deletion method. Recently though, it looks like ZeroAccess wants to be a bit different and make things more complicated for analysts. It uses the following piece of code (shown without comments) to achieve this:

zeroaccess_selfdelete_nocomments (20k image)

Now, ZeroAccess uses dynamic forking of a Win32 EXE to execute code in another process's context, but with a twist. Basically, instead of loading a Win32 EXE into another process's memory space, ZeroAccess prepares a customized stack and inserts it into another process's context, where it gets executed according to the sequence in the stack.

The commented code below shows the difference between the method used by ZeroAccess and the traditional way:

zeroaccess_selfdelete_commented (40k image)

For this to work, ZeroAccess also modifies the instruction pointer register to point to Windows' native API, ZwWaitForSingleObject. Once the modification and the customized stack are in place, the malware is all set to do some bad and then disappear.

When the remote process is executed after ResumeThread is called, it will first execute the ZwWaitForSingleObject pointed to by the modified instruction pointer register.

This function will wait until the caller process has been terminated and then resume execution in the remote process. It executes the next instruction at the top of the stack to close the terminated process's handle, and then continues to execute the next function as it works through the prepared stack.

Eventually, this will execute the function for deleting itself, by passing the FileDispositionInformation class to ZwSetInformationFile. The diagram below summarizes the custom stack's operations (click for larger view):

zeroaccess_selfdelete_customized_stack (64k image)

On a side note, the latest ZeroAccess is compatible with its older rootkit-capable variants, since we found a similar piece of code in both, which checks for the rootkit device object, \??\ACPI#PNP0303#2&da1a3ff&0 :

zeroaccess_selfdelete_check_existence (21k image)

On a machine with a ZeroAccess rootkit installed, it returns the specially crafted value STATUS_VALIDATE_CONTINUE; if not, the value STATUS_OBJECT_NAME_NOT_FOUND is returned. This check allows the latest variant to skip over some of its routines if it finds that the machine has already been infected by one of the older, rootkit-enabled variants.

On a final note, our customers are protected from both old and new ZeroAccess variants by various signature, heuristic and cloud-based detections.



Post by — Wayne


 
 

 
 
Tuesday, June 12, 2012

 
No, Heavy Salting of Passwords is Not Enough, Use CUDA Accelerated PBKDF2 Posted by Jarno @ 10:52 GMT

I have been following online discussions centred around recent password leaks (LinkedIn, eHarmony, Last.fm) and it seems that there are still many developers who hold a very strong belief that salt values will make passwords safe.

Even if an attacker had the salt, the common rationalization seems to be that an attack isn't practically feasible, because it would take forever to go through a 14-character keyspace, and thus the salt must be making things safe. One could say that developers are grasping at salt like a small child grasping his teddy bear, trusting that it will keep all the evil crackers at bay.

Unfortunately humans are generally not random number generators. A very nice piece of research from Francois Pesce provides compelling evidence of this.

What Francois is basically doing is attacking a big password database, something like LinkedIn's, with a dictionary attack and then adding any cracked passwords to his dictionary for subsequent rounds. This automatically optimizes the attack against the passwords that people tend to use for that particular service. And he is doing his experiment without rainbow tables, which means his setup also covers the situation where passwords are salted but the attacker has obtained the salt values.

This is a very effective method of finding the typical passwords that people tend to use, and then finding additional permutations of those common passwords, thus picking all the low-hanging fruit in the database.
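
To make the approach concrete, here's a rough, hypothetical sketch of that kind of round-based dictionary attack against unsalted SHA-1 hashes (the mangling rules and sample data are made up for illustration):

    # Hypothetical sketch in the spirit of Francois Pesce's experiment: every
    # password cracked in one round is fed back into the dictionary, with simple
    # mangling, for the next round.
    import hashlib

    def sha1_hex(word):
        return hashlib.sha1(word.encode("utf-8")).hexdigest()

    def mangle(word):
        # A few common permutations of a candidate password (illustrative only).
        yield word
        yield word + "1"
        yield word + "123"
        yield word.capitalize()
        yield word + "2012"

    def attack(leaked_hashes, dictionary, rounds=3):
        cracked = {}                                  # hash -> recovered plaintext
        for _ in range(rounds):
            newly_cracked = []
            for word in dictionary:
                for candidate in mangle(word):
                    h = sha1_hex(candidate)
                    if h in leaked_hashes and h not in cracked:
                        cracked[h] = candidate
                        newly_cracked.append(candidate)   # feed successes back in
            if not newly_cracked:
                break
            dictionary = newly_cracked                # next round builds on what worked
        return cracked

    # Toy example: three leaked (unsalted) SHA-1 hashes, one seed word.
    leaked = {sha1_hex(p) for p in ["monkey1", "Monkey", "monkey2012"]}
    print(attack(leaked, ["monkey"]))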

Who cares that an attacker is unable to crack the 20 to 40% of accounts with well-chosen passwords if he is able to get 60 to 80% of your users' accounts?

So how to defend against this?

Normal users should never choose a password that contains dictionary words. Long passphrases are a bit better, but they can also be surprisingly weak against a GPU-assisted dictionary attack.

Because most users will never use strong passwords, developers should switch to slow hash implementations and use a unique salt value per user. But there seems to be surprising reluctance to switch to a slow hash, as developers fear that they will run out of CPU cycles if too many users try to authenticate at the same time.
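
For the record, "slow hash plus a unique salt per user" is only a few lines of code. Here's a minimal sketch using the PBKDF2 function from Python's standard library; the iteration count is a placeholder to be tuned for your own hardware:

    # Minimal sketch of "slow hash implementation + unique salt value per user",
    # using PBKDF2 from Python's standard library. The iteration count is a
    # placeholder to be tuned so that one check costs a deliberate amount of time.
    import hashlib
    import os

    ITERATIONS = 100_000                  # placeholder; tune for your hardware

    def hash_password(password):
        salt = os.urandom(16)             # unique random salt for this user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                     salt, ITERATIONS)
        return salt, ITERATIONS, digest   # store all three with the account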

To counter the problem of too many valid users logging in at the same time, you can steal a page from the attackers' book. Calculate user password hash checks with a GPU, just like the enemy does.

nvidia_cuda (134k image)

For example, NVidia's CUDA platform is ideal for integration into a password authentication server. Using CUDA, you get a level playing field with your enemy. Even with a single CUDA-capable card per password server, you can compute password hash checks in 1 ms, which means the enemy will also need 1 ms per attempt against that password: instead of billions of attempts per second, an attacker will be limited to thousands of attempts per second.

Of course, an attacker could get, let's say, 100 CUDA or powerful ATI cards, but that would be prohibitively expensive and would still provide only 100,000 attempts per second, not 230,000,000,000 attempts per second.
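
The arithmetic behind those figures, assuming roughly 2.3 billion fast-hash attempts per second per card and a 1 ms slow-hash check:

    # Rough arithmetic behind the figures above (assumed rates, not benchmarks).
    fast_hash_rate = 2_300_000_000        # ~plain-hash attempts per second, one high-end GPU
    slow_hash_time = 0.001                # ~1 ms per GPU-assisted slow-hash check
    cards = 100

    print(f"fast hashes, {cards} cards: {fast_hash_rate * cards:,.0f} attempts/s")   # 230,000,000,000
    print(f"slow hashes, {cards} cards: {cards / slow_hash_time:,.0f} attempts/s")   # 100,000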

Unfortunately, there is no ready-made, drop-in integration of CUDA or ATI GPU support for web or other applications, but hey, that's what the open source and developer community is for.

I could not find an open source library for using CUDA in password authentication, but any of the open source cracking tools can be easily adapted. For example: http://code.google.com/p/pyrit/, which is intended for WPA/WPA2-PSK but could also be used to check passwords.

Post by — @jarnomn

 
 

 
 
Flame is Lame Posted by Mikko @ 10:36 GMT

When the Flame malware was found two weeks ago, it was characterized as 'Highly advanced', 'Supermalware' and 'The biggest malware in history'.

These comments were immediately met with ridicule from experts who were quick to point out that there was nothing particularly new or interesting in Flame.

In fact, the only unique thing in Flame seemed to be its large size. Even that was not too exciting as analysts went digging for examples of even larger malware and indeed found them (some malware tries to look like video files so they carry full-length movies inside their bodies).

Suggestions that Flame was created by a government and, like Stuxnet and Duqu, was the product of a nation-state were met with ridicule as well.

But let's have a look at what we've learned about Flame over these two weeks.

1. Flame has a keylogger and a screengrabber

The naysayers are unimpressed. "We've seen that before. Flame is lame."

2. Flame has built-in SSH, SSL and LUA libraries

"Bloated. Slow. Flame is still lame."

3. Flame searches for all Office documents, PDF files, Autodesk files and text files on the local drives and on network drives. As there would easily be too much information to steal, it uses IFilters to extract text excerpts from the documents. These are stored in a local SQLite database and sent to the malware operators. This way they can instruct the malware to home in on the really interesting material.

"Flame is lame"

4. Flame can turn on the microphone of the infected computer to record discussions spoken near the machine. These discussions are saved as audio files and sent back to the malware operators.

"Flame is lame, lol"

5. Flame searches the infected computer and the network for image files taken with digital cameras. It extracts the GPS location from these images and sends it back to the malware operators.

"Still, Flame is lame"

6. Flame checks if there are any mobile phones paired via Bluetooth to the infected computer. If so, it connects to the phone (iPhone, Android, Nokia etc), collects the Address Book from the phone and sends it to the malware operators.

"Flame is still lame, kind of."

7. The stolen info is sent out by infecting USB sticks that are used in an infected machine and copying an encrypted SQLite database to the sticks, to be sent onwards when they are used outside of the closed environment. This way data can be exfiltrated even from a high-security environment with no network connectivity.

"Agent.BTZ did something like this already in 2008. Flame is lame."

8. Now that Flame has finally been caught, the attackers have been busy destroying all evidence and actively removing the infections from the affected machines.

"Doesn't prove anything. Lame."

9. The latest research proves that Flame is indeed linked to Stuxnet. And just one week after Flame was discovered, the US Government admitted that it had developed Stuxnet together with the Israeli Armed Forces.

"You're just trying to hype it up. Still lame."

10. Flame creates a local proxy which it uses to intercept traffic to Microsoft Update. This is used to spread Flame to other machines in a local area network.

"Lame. Even if other computers would receive such a bogus update, they wouldn't accept it as it wouldn't be signed by Microsoft".

The fake update was signed with a certificate chaining up to the Microsoft root, as the attackers found a way to repurpose Microsoft Terminal Server license certificates. Even this wasn't enough to spoof newer Windows versions, so they did some cutting-edge cryptographic research and came up with a completely new way to create hash collisions, enabling them to spoof the certificate. They still needed a supercomputer, though. And they had been doing this silently since 2010.

"…"

And suddenly, just like that, the discussion on whether Flame is lame or not… vanished.

 
 

 
 
Monday, June 11, 2012

 
Still using the default setting on your password safe? Don't. Posted by SecResponse @ 15:52 GMT

Last week, we republished a post called: Are you sure SHA-1+salt is enough for passwords?

To continue on the topic of passwords: not only should you use a proper iteration count when implementing password hashing in code — the same thing also applies to password safe software such as KeePass.

As strong passwords are a pain to remember, many people opt to use KeePass or another password manager, and then copy the password database to a sync service. Passwords are then available on all devices, whether desktop, laptop, phone or tablet. However, this brings a potential problem: the password file is more likely to end up in the wrong hands if one of the devices is compromised or stolen, or if the sync service is hacked.

An obvious defense is to use a strong password on the password database file. But strong passwords are a pain to enter on a mobile phone, and so many people use shorter passwords than is wise. A password or passphrase longer than 14 characters is the proper way of doing things, but we all know that most people just won't do it.

One can mitigate the problem of a short password in mobile use by adjusting the key iteration count in the password manager's configuration. Common wisdom is to set the iteration count so that it takes about one second to verify the password on the slowest device you are using.

For example, if you use KeePass, the default key derivation iteration count is 6,000. On a typical mobile phone you can get about 200,000 iterations per second. So by setting a proper key iteration count you make password cracking ~33 times more expensive for the attacker. Of course, adding one character to your password gives about the same protection, and adding two characters gives about 1,024 times better protection. But that is no reason to leave the key iteration count at a ridiculously low default value.
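
If you want to put numbers on your own device, here's a small, hypothetical calibration sketch. It uses PBKDF2 from Python's standard library as a stand-in for the password manager's key-derivation function (KeePass actually uses an AES-based key transformation), but the idea is the same:

    # Hypothetical calibration sketch: measure how many key-derivation rounds this
    # device manages per second, then pick a round count that takes about one
    # second to unlock. PBKDF2 stands in for the password manager's own KDF.
    import hashlib
    import time

    def rounds_per_second(sample_rounds=50_000):
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"master password", b"16-byte-salt....", sample_rounds)
        return sample_rounds / (time.perf_counter() - start)

    rate = rounds_per_second()
    target = int(rate)                    # ~1 second per unlock on this device
    default = 6_000                       # KeePass default at the time of writing
    print(f"~{rate:,.0f} rounds per second on this device")
    print(f"suggested round count: {target:,}")
    print(f"cracking cost vs. default: ~{target / default:.0f}x")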

Here's KeePass on a Windows laptop, set to a value of 4,279,296:

Number of key encryption rounds

And a free tip to anyone who is developing a mobile password manager: the low CPU power of mobile devices seriously limits the key iteration count, keeping it far from proper figures, which should be around 4-6 million instead of hundreds of thousands. So how about using the phone's GPU for key derivation? That way you could have a proper iteration count, and a more level playing field against password crackers that use GPU acceleration.

Post by — @jarnomn

 
 

 
 
Friday, June 8, 2012

 
Seeking Internet Security 2013 Beta Testers Posted by Sean @ 10:22 GMT

Greetings, folks! Our Innovation & Customer Involvement team has a special offer (just for you!) that they've asked us to write about. So without further ado…

We present to you F-Secure's Internet Security 2013 Beta.

Try Internet Security 2013

What's new?

What's New with Internet Security 2013

Support for Google Chrome. Or put another way, browser-independent online safety: client-side, not tied to any particular browser. Our Firewall approach has been adjusted. New DeepGuard, our behavioral engine. (And plenty of other cool new tech under the hood.)

More info here.







 
 

 
 
Rescue CD Beta Released Posted by Alia @ 08:32 GMT

Yesterday we released a beta version of our Rescue CD tool.

The beta mainly contains incremental changes to existing functionality, such as updating the Knoppix OS to version 6.7.1 and adding the ability to boot from USB.

Rescue CD

As usual, the beta is for testing only and we strongly recommend against using it on production machines. For our standard Rescue CD tool, go here instead.

Full details are on the Rescue CD beta download page, or you can get the package directly from here (ISO, 143 MB). Be sure to read the release notes and user guide before installing.

 
 

 
 
Thursday, June 7, 2012

 
Redux: Are you sure SHA-1+salt is enough for passwords? Posted by Sean @ 10:05 GMT

Yesterday, LinkedIn confirmed reports that some member passwords have been compromised.

Here's some info from their blog:

"It is worth noting that the affected members who update their passwords and members whose passwords have not been compromised benefit from the enhanced security we just recently put in place, which includes hashing and salting of our current password databases."

Hashing and salting? Is that enough? That's the question our own Jarno Niemela asked last year in this reprinted post (with updates).

—————

The anarchic Internet group called Anonymous recently hacked HBGary Federal and rootkit.com, an online forum dedicated to analyzing and developing rootkit technologies. All user passwords at rootkit.com have been compromised.

Given this compromise, I'd like to point out one of my favorite topics in application security — password hashing.

I've forgotten your password again, could you remind me?

It's all too common that Web (and other) applications use MD5, SHA1, or SHA-256 to hash user passwords, and more enlightened developers even salt the password. And over the years I've seen heated discussions on just how salt values should be generated and on how long they should be.

Unfortunately in most cases people overlook the fact that MD and SHA hash families are designed for computational speed, and the quality of your salt values doesn't really matter when an attacker has gained full control, as happened with rootkit.com. When an attacker has root access, they will get your passwords, salt, and the code that you use to verify the passwords.

And this is the assumption any security design should be based on: the attacker has access to everything that is on the server.

Salt is primarily intended to prevent precomputed attacks, also known as rainbow tables. And a common assumption has been that as long as precomputed attacks are prevented, passwords are relatively safe even if an attacker were to get the salt values along with the user password hashes.

But MD and SHA hash variants have been designed for computational speed, which means that an attacker can easily get billions of brute force attempts per second when using a video graphics display card for processing.

See: http://www.golubev.com/hashgpu.htm

Which means that even with a single ATI HD 5970, an attacker can cover a password space equivalent to a typical rainbow table (2^52.5 hashes) in 33 days. And it's a safe bet that a serious attacker will have more than one card for the job.
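
A quick back-of-the-envelope check of those figures:

    # Back-of-the-envelope check: a rainbow-table-sized keyspace of 2^52.5 hashes,
    # covered in 33 days, implies a hash rate in the low billions per second,
    # roughly what GPU benchmarks report for plain SHA-1.
    keyspace = 2 ** 52.5                  # hashes in a typical rainbow table
    seconds = 33 * 24 * 60 * 60           # 33 days
    print(f"required rate: {keyspace / seconds:,.0f} hashes per second")   # ~2.2 billion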

When an attacker has your salt values and code, the only thing protecting user accounts is the strength of the passwords they use, and people are not very good sources of entropy. By combining dictionary attacks and brute force techniques, it will not take very long to break a significant proportion of passwords, even for a large site with many accounts.

So what should be done to avoid this?

The first thing to consider is that passwords are very much like safes in the real world: what matters is not only the length of the combination protecting the contents, but also how long each attempt to open the safe takes.

This means that SHA1, or any other plain hash algorithm, is clearly a no-go for secure password authentication.

What you want to use is something that will not be trivial to brute force. Instead of allowing 2,300 million attempts per second, you want something that limits an attacker to 10,000 or 100,000 attempts per second.

And while using salt values is vital to proper implementation, it is not a silver bullet which will make your problem go away.

This requires a password hashing scheme that fulfills the following properties:

  •  Computational time required can be adjusted easily when processing power increases
  •  Each user can have a unique number of iterations
  •  Each user's hash is unique, so that it is impossible to tell whether two users have the same password by comparing hashes

There are several such schemes to choose from:

  •  PBKDF2 http://en.wikipedia.org/wiki/PBKDF2
  •  Bcrypt http://www.openwall.com/crypt/
  •  PBMAC http://www.rsa.com/rsalabs/node.asp?id=2127
  •  scrypt http://www.tarsnap.com/scrypt.html

Each of the alternatives has its strengths and weaknesses, but all of them are far stronger than general purpose hash implementations such as SHA1+salt.

So if you are working with passwords, pick one of the schemes above, determine the number of iterations it takes your server to check a password in the desired length of time (10 ms, 200 ms, et cetera), and use that. Have a unique salt value and iteration count for each user: anything that forces the attacker to attack each account separately rather than being able to test all accounts on each attempt.
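
For illustration only, here's what that recipe can look like with PBKDF2 from Python's standard library. The iteration count is a placeholder you would calibrate against your own server, as described above:

    # Illustrative sketch of the recipe above with PBKDF2: a unique salt and a
    # per-user iteration count are stored with each hash, and verification uses
    # a constant-time comparison. Iteration counts are placeholders to be
    # calibrated against your server's timing budget.
    import hashlib
    import hmac
    import os

    def create_record(password, iterations=200_000):
        salt = os.urandom(16)                     # unique salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                     salt, iterations)
        # The count is stored per user, so it can be raised for new accounts
        # as hardware gets faster without breaking existing records.
        return {"salt": salt, "iterations": iterations, "hash": digest}

    def verify(password, record):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                        record["salt"], record["iterations"])
        return hmac.compare_digest(candidate, record["hash"])

    # record = create_record("correct horse battery staple")
    # assert verify("correct horse battery staple", record)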

Original post (and comments) here.

 
 

 
 
Tuesday, June 5, 2012

 
A Pandora's Box We Will Regret Opening Posted by Mikko @ 10:56 GMT

If somebody had told me five years ago that by 2012 it would be commonplace for countries to launch cyberattacks against each other, I would not have believed it. If somebody had told me that a Western government would be using cybersabotage to attack the nuclear program of another government, I would have thought it was a Hollywood movie plot. Yet, that's exactly what's happening, for real.

Cyberattacks have several advantages over traditional espionage or sabotage. They are effective, cheap and deniable. This is why governments like them. In fact, if Obama administration officials had not leaked the confirmation that the U.S. government (together with the Israelis) was behind Stuxnet, we probably would never have known for sure.

In that sense, it's a bit surprising that the U.S. government seems to have taken the credit, and the blame, for Stuxnet. Why did they do it? The most obvious answer seems to be that it's an election year and the voters like to see the president taking on adversaries like Iran. But we don't really know.

The downside of owning up to cyberattacks is that other governments can now feel free to do the same. And the United States has the most to lose from attacks like these. No other country has so much of its economy linked to the online world.

Other governments are already on the move. The game is on, and I don't think there's anything we can do to stop it anymore. International espionage has already gone digital. Any future real-world crisis will have cyberelements in play as well. So will any future war. The cyberarms race has now officially started. And nobody seems to know where it will take us.

By launching Stuxnet, American officials opened Pandora's box. They will most likely end up regretting this decision.

Mikko Hypponen

This column was originally published in the Room for Debate section of The New York Times. Be sure to read the two other opinions from Ralph Langner and James Lewis.

 
 

 
 
Monday, June 4, 2012

 
Microsoft Update and The Nightmare Scenario Posted by Mikko @ 14:09 GMT

About 900 million Windows computers get their updates from Microsoft Update. In addition to the DNS root servers, this update system has always been considered one of the weak points of the net. Antivirus people have nightmares about a variant of malware spoofing the update mechanism and replicating via it.

Turns out, it looks like this has now been done. And not by just any malware, but by Flame.

The full mechanism isn't yet completely analyzed, but Flame has a module which appears to attempt to do a man-in-the-middle attack on the Microsoft Update or Windows Server Update Services (WSUS) system. If successful, the attack drops a file called WUSETUPV.EXE to the target computer.

This file is signed by Microsoft with a certificate that is chained up to Microsoft root.

Except it isn't really signed by Microsoft.

Turns out the attackers figured out a way to misuse a mechanism that Microsoft uses to create Terminal Services activation licenses for enterprise customers. Surprisingly, these keys could also be used to sign binaries.

Here's what the Certification Path of the certificate used to sign WUSETUPV.EXE looks like:

Flame

The full details of how this functionality works are still under analysis. In any case, it has not been used in large-scale attacks. Most likely this function was used to spread further inside an organization or to drop the initial infection on a specific system.

Microsoft has announced an urgent security fix to revoke three certificates used in the attack.

The fix is available via — you guessed it — Microsoft Update.

Here's an animated screenshot showing what the update does: it adds two certificates issued by Microsoft Root Authority and one by Microsoft Root Certificate Authority to the list of Untrusted Certificates.

Flame

Having a Microsoft code signing certificate is the Holy Grail of malware writers. This has now happened.

I guess the good news is that this wasn't done by cyber criminals interested in financial benefit. They could have infected millions of computers. Instead, this technique has been used in targeted attacks, most likely launched by a Western intelligence agency.

 
 

 
 
Saturday, June 2, 2012

 
On Stuxnet, Duqu and Flame Posted by Mikko @ 11:58 GMT

A couple of days ago, I received an e-mail from Iran. It was sent by an analyst from the Iranian Computer Emergency Response Team, and it was informing me about a piece of malware their team had found infecting a variety of Iranian computers. This turned out to be Flame: the malware that has now been front-page news worldwide.

When we went digging through our archive for related samples of malware, we were surprised to find that we already had samples of Flame, dating back to 2010 and 2011, that we were unaware we possessed. They had come through automated reporting mechanisms, but had never been flagged by the system as something we should examine closely. Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.

What this means is that all of us had missed detecting this malware for two years, or more. That's a failure for our company, and for the antivirus industry in general.

It wasn't the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild, and was only discovered after an antivirus firm in Belarus was called in to look at machines in Iran that were having problems. When researchers dug back through their archives for anything similar to Stuxnet, they found that a zero-day exploit that was used in Stuxnet had been used before with another piece of malware, but had never been noticed at the time. A related malware called DuQu also went undetected by antivirus firms for over a year.

Stuxnet, Duqu and Flame are not normal, everyday malware, of course. All three of them were most likely developed by a Western intelligence agency as part of covert operations that weren't meant to be discovered. The fact that the malware evaded detection proves how well the attackers did their job. In the case of Stuxnet and DuQu, they used digitally signed components to make their malware appear to be trustworthy applications. And instead of trying to protect their code with custom packers and obfuscation engines, which might have drawn suspicion to them, they hid in plain sight. In the case of Flame, the attackers used SQLite, SSH, SSL and LUA libraries that made the code look more like a business database system than a piece of malware.

Someone might argue that it's good we failed to find these pieces of code. Most of the infections occurred in politically turbulent areas of the world, in countries like Iran, Syria and Sudan. It's not known exactly what Flame was used for, but it's possible that if we had detected and blocked it earlier, we might have indirectly helped oppressive regimes in these countries thwart the efforts of foreign intelligence agencies to monitor them.

But that's not the point. We want to detect malware, regardless of its source or purpose. Politics don't even enter the discussion, nor should they. Any malware, even targeted, can get out of hand and cause "collateral damage" to machines that aren't the intended victim. Stuxnet, for example, spread around the world via its USB worm functionality and infected more than 100,000 computers while seeking out its real target, computers operating the Natanz uranium enrichment facility in Iran. In short, it's our job as an industry to protect computers against malware. That's it.

The truth is, consumer-grade antivirus products can't protect well against targeted malware created by well-resourced nation-states with bulging budgets. They can protect you against run-of-the-mill malware: banking trojans, keystroke loggers and e-mail worms. But targeted attacks like these go to great lengths to avoid antivirus products on purpose. And the zero-day exploits used in these attacks are unknown to antivirus companies by definition. As far as we can tell, before releasing their malicious codes to attack victims, the attackers tested them against all of the relevant antivirus products on the market to make sure that the malware wouldn't be detected. They have unlimited time to perfect their attacks. It's not a fair war between the attackers and the defenders when the attackers have access to our weapons.

Antivirus systems need to strike a balance between detecting all possible attacks without causing any false alarms. And while we try to improve on this all the time, there will never be a solution that is 100 percent perfect. The best available protection against serious targeted attacks requires a layered defense, with network intrusion detection systems, whitelisting of trusted apps and active monitoring of inbound and outbound traffic of an organization's network.

This story does not end with Flame. It's highly likely there are other similar attacks already underway that we haven't detected yet. Put simply, attacks like these work.

Flame was a failure for the antivirus industry. We really should have been able to do better. But we didn't. We were out of our league, in our own game.

Mikko Hypponen

This column was originally published in Wired.com.

Edited to add:

Mikko's column has generated some feedback we'd like to share.

Rebuttal: Got One Part Right; You Fail by @attritionorg
correcting a rebuttal by @imaguid

 
 

 
 
Friday, June 1, 2012

 
Tool: DNS Check #DNSChanger Posted by Sean @ 16:05 GMT

An Estonian company called Rove Digital was busted last November. Why? Because it was a front for the ad-fraud DNSChanger botnet. And ever since November, the USA's FBI has been responsible for the substitute DNS servers designed to keep compromised computers from being disconnected (and causing support call chaos).

Back in March, we wrote about the looming expiration of the FBI's authority. Fortunately, that authorization was extended until July.

According to Google, roughly half a million instances of DNSChanger still exist in the wild and the company recently began to notify people of the problem using this message.

The Shadow Server Foundation has an impressive visualization of infections:



YouTube: DNSChanger Infections

So now you may find yourself asking: how can I check for a DNSChanger infection?

The DNSChanger Working Group has an extensive list of sites which will check for problems.

F-Secure Labs also has something to offer: DNS Check.

F-Secure DNS Check

It's a script-based tool that can be used to reset problematic DNS settings.

DNS Check will scan to determine if the computer's DNS is configured to use the botnet's servers (now the FBI's) and can be used to reset default settings to DHCP, OpenDNS, or Google DNS.
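
DNS Check itself is an HTA script, but the core check is simple. Purely as an illustration (this is not F-Secure's code, and the rogue address ranges should be verified against the DNSChanger Working Group's published list), it boils down to something like this:

    # Illustrative sketch only (not F-Secure's DNS Check): test whether a
    # machine's configured DNS servers fall inside the address ranges that the
    # DNSChanger Working Group attributes to the rogue (now FBI-operated)
    # servers. Verify the ranges against dcwg.org before relying on them.
    import ipaddress

    ROGUE_RANGES = [
        "85.255.112.0/20",
        "67.210.0.0/20",
        "93.188.160.0/21",
        "77.67.83.0/24",
        "213.109.64.0/20",
        "64.28.176.0/20",
    ]

    def is_rogue(dns_server):
        addr = ipaddress.ip_address(dns_server)
        return any(addr in ipaddress.ip_network(net) for net in ROGUE_RANGES)

    # Example: check resolvers currently configured on the machine. (On Windows
    # these could be read from the Tcpip\Parameters\Interfaces registry keys;
    # on Unix-like systems, from /etc/resolv.conf.)
    for server in ["8.8.8.8", "85.255.112.36"]:   # sample values
        print(server, "ROGUE" if is_rogue(server) else "ok")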

FTP download: DNSCheck.zip

SHA1: 026b19bfbeeb2e02a9d4157f6fffa82ffcb62ab9 – DNSCheck.hta
SHA1: 5ddd867dc15a3398610868f06daec541278d8b16 – README.txt
SHA1: 2adedec5ceb4009d9b705cb6d9cb4c323dddc9a1 – admin_console.bat
SHA1: dcc8408c05cec84e4ac7420e6f7036c91e708ee2 – .\images\fsecure-logo.png
SHA1: a3630f948bb4d7b6c97318a50c5ad25fa85dca14 – .\images\icon.ico







 
 

 
 
NYT: Obama Order Sped Up Wave of Cyberattacks Against Iran Posted by Sean @ 09:07 GMT

This:

Olympic Games, Stuxnet, Obama

From the New York Times:

"…an element of the program accidentally became public in the summer of 2010 because of a programming error that allowed it to escape Iran's Natanz plant and sent it around the world on the Internet."

A programming error unleashed Stuxnet on the Internet?

Wow. Thanks for that.

Question: To whom may the antivirus industry and its affected customers send the bill for the collateral damage done?