NEWS FROM THE LAB - January 2013
 

 

Thursday, January 31, 2013

 
More Facebook Graph Search Suggestions Posted by Sean @ 17:25 GMT

Yesterday as I was testing Facebook's Graph Search, which is in Beta, I searched for the following: women who live in Helsinki, Finland and who like sushi. (I wanted something that would get lots of results. It did.)

At the end of the day, I cleared my search history.

Then today, a sponsored story for a Helsinki-based sushi restaurant appeared in my News Feed.

FacebookSettingsLimits

Perhaps it's just a coincidence…

In any case, today, continuing my testing, I searched for people with my name who live in Finland. (The result: me and another guy.) Graph Search will definitely make it easier for your Facebook profile to be found by others.

Here are a couple of things to check on, just to make sure you don't have anything exposed.

First of all, consider limiting all of your old posts. Most of the profiles that I've observed make good use of current privacy controls, but some have pre-2010 legacy posts which are public.

FacebookSettingsLimits

Secondly, edit your likes. (facebook.com/"profile.name"/favorites)

FacebookSettingsLimits

At least limit each category to friends rather than public, especially if your posts are generally only shared with friends.

FacebookSettingsLimits

It is important to note that liking a Facebook Page is always public, so "liking sushi" will still end up being searchable by Graph Search. But at least with adjusted settings, the rest of your likes will be in a "black box" that only your friends can see by browsing.

Also, while you're editing your privacy settings, consider unliking some of the stuff you've accumulated over the years.

A hat tip to @rik_ferguson for reminding me of the Likes settings.

Regards,
Sean

 
 

 
 
New York Times Hit with Targeted Attacks Posted by Mikko @ 07:02 GMT

The New York Times had a major scoop today — on The New York Times. Turns out, they were hacked.

In fact, they were hacked for several months. Chinese hackers stole the corporate passwords of every New York Times employee. In addition, they gained access to the home computers of several of the paper's journalists.

These attacks started right after the newspaper published a revealing investigation on the relatives of Wen Jiabao, China's prime minister.

New York Times hacked
Screenshot of an article written by David Barboza. His e-mail account was breached by the attackers.

It's worth noting that no customer data was stolen. These attackers were not interested in making money. They wanted to spy on The Times.

China's Ministry of National Defense's response to the allegations: "To accuse the Chinese military of launching cyberattacks without solid proof is unprofessional and baseless".

Journalists have been targeted by similar attacks before. In some cases, journalist names have been used as a lure in targeted attacks.

For some commentary on the news from Chinese side, see this blog post from Liz Carter.

P.S. Attacks like this will be discussed in detail in the upcoming CARO Workshop in Bratislava. See 2013.caro.org for more info.

 
 

 
 
Wednesday, January 30, 2013

 
Facebook's Graph Search: Clear Your Searches Posted by Sean @ 12:32 GMT

I'm testing out Facebook's new Graph Search today.

Graph Search: Friends of my friends who are women and live near Helsinki, Finland. Result: More Than 1,000 People (actually 799).

Facebook Graph Search

Let's try something a bit more personal.

Graph Search: Friends of my friends who are single women and live near Helsinki, Finland and are older than 30 and younger than 39. Result: No Results. Sorry, we couldn't find any results for this search.

Facebook Graph Search

No results. In fact, only ten of the women actively listed themselves as "single", and only two of those listed their age.

Conclusion? Facebook privacy settings aren't all that mysterious for the average user.

However… what about my own Search history in my Activity Log?

Facebook Graph Search

What's that, you ask?

Facebook logs searches. To see yours, go to your Timeline, click on the Activity Log button, click More, and then click Search. It's way down at the bottom of the list (almost as if Facebook doesn't want you to find it).

Facebook Graph Search

You can purge your search history using the "Clear Searches" link in the upper right corner.

Facebook Graph Search

Hopefully this clears out any additional data which advertisers could use to target your account.

Regards,
Sean

 
 

 
 
Tuesday, January 29, 2013

 
Universal Plug and Pray Posted by Sean @ 14:57 GMT

From the files of things that really shouldn't surprise us: Rapid 7 released a white paper today on its research into the global exposure of Universal Plug and Play (UPnP) enabled network devices.

Rapid 7, Security Flaws in Universal Plug and Play

The results are impressive.

"Over 80 million unique IPs were identified that responded to UPnP discovery requests from the internet. Somewhere between 40 and 50 million IPs are vulnerable to at least one of three attacks […]. The two most commonly used UPnP software libraries both contained remotely exploitable vulnerabilities."

If you're a network administrator, be sure to check it out. Rapid 7 is offering a tool called ScanNow UPnP (which requires the Java Runtime Environment) that can identify exposed UPnP endpoints in your network.
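
If you just want to see what UPnP discovery looks like on the wire, here is a minimal Python sketch (unrelated to Rapid 7's tool) that sends a standard SSDP M-SEARCH probe to the usual multicast address, 239.255.255.250:1900, and prints whichever local devices answer.

# Minimal sketch: send one SSDP M-SEARCH probe to the standard UPnP multicast
# address and print whichever devices on the local network respond.
import socket

MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",
    "ST: upnp:rootdevice",
    "",
    "",
]).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))

try:
    while True:
        data, addr = sock.recvfrom(4096)
        # Print the responder's address and the first line of its reply
        print(addr[0], data.split(b"\r\n", 1)[0].decode(errors="replace"))
except socket.timeout:
    pass  # no more responses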

Edited to add: Cert.org's Vulnerability Note VU#922681. A hat tip goes to @BrianHonan.

 
 

 
 
Friday, January 25, 2013

 
10th Anniversary of the Slammer Worm Posted by Mikko @ 12:28 GMT

This is how January 25th started for us, 10 years ago:

Jan 25 05:31:54 kernel: UDP Drop: IN=ppp0 SRC=207.61.242.67 DST=80.142.167.238 TTL=117 ID=30328 PROTO=UDP SPT=2201 DPT=1434 LEN=384

The above snippet is the first log we have of what became known as the Slammer worm (or Sapphire or SQL Slammer).
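
For the curious, here is a minimal Python sketch that counts Slammer-style probes in an iptables-style firewall log like the one above, assuming the same "PROTO=UDP ... DPT=1434" line format; feed it log lines on standard input.

# Minimal sketch: count probes to UDP port 1434 (the port Slammer used) in an
# iptables-style firewall log, grouped by source address. Reads log lines
# from standard input.
import re
import sys
from collections import Counter

probe = re.compile(r"PROTO=UDP\b.*\bDPT=1434\b")
source = re.compile(r"SRC=(\S+)")

counts = Counter()
for line in sys.stdin:
    if probe.search(line):
        match = source.search(line)
        if match:
            counts[match.group(1)] += 1

for ip, hits in counts.most_common(10):
    print(f"{ip}\t{hits}")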

Slammer produced tons of network traffic. Here's an old screenshot from average.matrix.net, showing how global packet loss skyrocketed due to the worm.

slammer

Here's our original warning sent out on the worm:

F-Secure warns the computer users about new Internet worm known as Slammer. The worm generates massive amounts of network packets, overloading internet servers. This slows down all internet functions such as sending e-mail or surfing the net.

The worm was first detected in the Internet on January 25, 2003 around 5:30 GMT. After this the worm quickly spread worldwide to generate one of the biggest attacks against internet ever. According to reports, several large web sites and mail servers became unavailable.

Slammer infects only Windows 2000 servers running Microsoft SQL Server, and is therefore not a threat to the end user machines. However, its functions are still visible to the end users by the way it blocks the network traffic.

The worm uses UDP port 1434 to exploit a buffer overflow in MS SQL server. The worm is extremely small, only 376 bytes in size. It has no other functionality than to spread further, but the spreading process is so aggressive that the worm generates extreme loads.

As the worm does not infect any files, an infected machine can be cleaned simply by rebooting the machine. However, if the machine is connected to the network without applying SP2 or SP3 patches for MS SQL Server, it will soon get reinfected.

We've never seen such a small virus do so much damage so fast. Technical description and pictures of Slammer are available at https://www.f-secure.com/v-descs/mssqlm.shtml (Note: the link still works in 2013).

It's remarkable how small Slammer was. The whole worm fit into a single UDP packet. Basically, the worm would fit in 5 tweets. Here's the whole code:

slammer

Slammer was followed by Blaster later in 2003 and by Sasser in 2004. They all produced some remarkable real-world problems:

slammer

Slammer kept us busy for several days. My old email archive had this overtime report for the first day:

slammer

So it was me, Katrin Tocheva, Gergely Erdelyi and Ero Carrera decoding Slammer on a Saturday in 2003. I hope the weather was bad… but I don't remember any more.

Mikko

 
 

 
 
Wednesday, January 23, 2013

 
University Courses on Reverse Engineering and Malware Analysis Posted by SuGim @ 08:56 GMT

Today marks the first lecture of our spring 2013 Reverse Engineering Malware course at Aalto University (Espoo campus) in Finland.

As with the previous courses we've done, this program is taught by researchers from our Helsinki Security Lab. It teaches students what malicious code is, how it can be analyzed, and how to reverse engineer executable code for different platforms, such as Windows and Android. Students will explore a variety of topics, including binary obfuscation and exploits. The course will also cover non-technical topics such as ethics and legal issues related to information security.

As is usual for our courses, students get a very hands-on approach to learning, which includes solving reverse engineering puzzles like the one below, created by our own researchers:

homework

On the other side of the world in Kuala Lumpur, Malaysia – where our other Security Lab is located – we are also collaborating with lecturers from Monash University's School of Information Technology (Sunway Campus) to launch a similar course.

monash

For the first time, students will be offered a Malware Analysis course, with a syllabus that places a greater focus on analyzing malware targeting the Android platform.

This course will include brand new lecture and lab materials to help students gain a broader perspective of this field and develop the specialized skills needed for analyzing malware. Subjects covered in the lectures and lab sessions include understanding the Android security framework, its operating and file systems, and static and dynamic analysis of malware.

 
 

 
 
Saturday, January 19, 2013

 
Year 2038 problem Posted by Mikko @ 19:53 GMT

2038
Today is the 19th of January, 2013, which means the 19th of January, 2038 is now exactly 25 years away from us.

Why does it matter? Because at 03:14:07 UTC on 19th of January 2038 we will run into the Year 2038 Problem.

Many Unix-based systems can't handle dates beyond that moment. For example, common Unix-based phones today won't let you set the date beyond 2038. This applied to all the iPhones and Androids we tried it on (iOS is based on BSD and Android is Linux). Obviously this does not apply to Windows Phones, which let you set the date all the way to the year 3000.

Yes, 25 years is a long time. But Unix-based systems will definitely still be in use then, and some things can start failing well before 2038. For example, if your Unix-based system calculates 25-year interest today, it had better not be using a 32-bit time_t for the calculation.
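
If you want to see the exact rollover moment yourself, here is a minimal Python sketch; it simply converts the largest value a signed 32-bit time_t can hold into a UTC date.

# Minimal sketch: a signed 32-bit time_t tops out at 2**31 - 1 seconds after
# the Unix epoch; converting that value to UTC gives the 2038 rollover moment.
from datetime import datetime, timezone

MAX_32BIT_TIME_T = 2**31 - 1
print(datetime.fromtimestamp(MAX_32BIT_TIME_T, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00; one second later, a 32-bit time_t wraps around.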

 
 

 
 
Friday, January 18, 2013

 
Computer Security Circa 1990 Posted by Sean @ 12:49 GMT

Hackers with a cause. They're a danger to your corporate network!

This set of computer security films from 1990 was originally produced as a wake-up call for the executives at AT&T Bell Laboratories.



1990's big innovation? Individual Network Access Passwords (INAP).

Yep, individual passwords were an innovation more than 20 years ago. Thank goodness we don't use those anymore, eh?

Wait, what? We do??

Oh… crap.

From Wired UK: Hacked: Passwords have failed and it's time for something new

 
 

 
 
Wednesday, January 16, 2013

 
Protecting Against Attacks Similar to "Red October" Posted by SecResponse @ 15:13 GMT

The targeted attack campaign dubbed Red October raises an interesting question for people working on the front line of corporate security: how do you defend your own organization against such attacks? The good news is that, at least for campaigns such as Red October, the information needed has been available for a long time already.

From a technical point of view, the targeted attacks used by Red October look very much like any other corporate espionage attack. The attackers need to get a user to open an interesting-looking document, the program used to view the document needs to be vulnerable, the system then needs to allow a payload to be written to disk, and finally the payload needs to be able to communicate back to a C&C server.

So in order to foil the attack, we as defenders only need to stop any one of those stages; the attack then fails from a data-stealing point of view, even if some cleanup may still be needed.

The first and most obvious defense is of course user education: all users should be trained to be suspicious of documents coming from external sources, especially if they are not expecting that party to send a document. Unfortunately, a moment of inattention is all that is required to open something that should simply have been deleted, so education alone is not enough.

The second layer of defense is obviously up-to-date and well-configured corporate security software. Our own F-Secure Client Security would have alerted on the actions performed by the Red October exploit payload. However, the important thing to remember is that for any modern security software to be at its most effective, you should allow the software to talk to its back end servers. It is a very common and frustrating situation that a corporation allows Internet-connected browsers but configures the workstation's security software so that it cannot be part of a real-time protection network.

A third layer of defense is to use EMET, Microsoft's application memory hardening and exploit mitigation tool. We tried running Red October-associated exploit files with EMET enabled and its recommended settings; the exploit was stopped and was not able to take over the system.

EMET, Red October

A fourth layer of defense is to use Microsoft AppLocker to prevent the execution of files that are not signed, or are not otherwise well known and trusted by system administration. With AppLocker, the payload dropped into %programfiles%\Windows NT\svchost.exe on Windows XP or %appdata%\Microsoft\svchost.exe on Vista/7 would not have been able to execute.
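
As a quick illustration (not an F-Secure or Microsoft tool), here is a minimal Python sketch that checks whether the dropper paths mentioned above are present on a Windows machine; the two paths come from this post, everything else is illustrative.

# Minimal sketch: check whether the dropper paths mentioned above exist on a
# Windows machine. The two paths come from the post; the script itself is
# purely illustrative.
import os

CANDIDATE_PATHS = [
    r"%programfiles%\Windows NT\svchost.exe",  # Windows XP drop location
    r"%appdata%\Microsoft\svchost.exe",        # Vista/7 drop location
]

for raw in CANDIDATE_PATHS:
    path = os.path.expandvars(raw)
    if os.path.exists(path):
        print("Suspicious file present:", path)
    else:
        print("Not found:", path)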

A fifth layer of defense is DNS whitelisting: allow only well-known domain names to resolve without first prompting the user, preferably with a CAPTCHA. We have done research on the C&C domains used by known corporate espionage attacks, and DNS whitelisting has been ~99% effective in preventing exploit-to-C&C communication.
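
To make the idea concrete, here is a minimal Python sketch of the whitelisting check on the client side; the allow list is illustrative, and a real deployment would enforce this on the resolver and add the user prompt/CAPTCHA step described above.

# Minimal sketch of DNS whitelisting: resolve a hostname only if it falls
# under an explicitly allowed domain. The allow list is illustrative.
import socket

ALLOWED_DOMAINS = {"example.com", "microsoft.com"}  # illustrative allow list

def is_whitelisted(hostname):
    labels = hostname.lower().rstrip(".").split(".")
    # Allow the name itself or anything under an allowed domain.
    return any(".".join(labels[i:]) in ALLOWED_DOMAINS for i in range(len(labels)))

def resolve_if_allowed(hostname):
    if not is_whitelisted(hostname):
        # This is where a real deployment would prompt the user instead.
        raise PermissionError(hostname + " is not on the DNS whitelist")
    return socket.gethostbyname(hostname)

print(resolve_if_allowed("www.example.com"))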

If you would like to know more about the methods listed here, or are curious as to what else we recommend in addition to using our product, I suggest reading the slides from our presentations on information and corporate security hardening against malware and targeted attacks.

"Making Life Difficult For Malware" [PPTX] was originally presented at theh T2 Information security conference in October 2011, later at Blackhat in May 2012, and covers technical hardening of the operating system and applications against targeted and other exploit based attacks.

"Protecting against computerized corporate espionage" [PPTX] was originally presented at T2 2012 and covers what you should do in your organization to make your operations more resilient against targeted attacks.

@jarnomn

 
 

 
 
Tuesday, January 15, 2013

 
Every Month is Red October Posted by SecResponse @ 12:50 GMT

By now, you've probably read the news about "Red October" and you're wondering how worried you should be. Red October is the latest AV industry case study of digital espionage. (Kaspersky Lab's post here.)

From a technical point of view, Red October looks very much like any other targeted corporate espionage attack. The attackers use exploit documents with credible-looking content so the victim will open the file, then drop a malicious payload onto the device and start mining all the information they can from the infected system.

It appears the exploits used were not advanced in any way. The attackers used old, well-known Word, Excel and Java exploits. So far, there is no sign of zero-day vulnerabilities being used.

Our back end systems automatically analyze document exploits. Here are screenshots of some used in the Red October attacks:

Red October

Red October

Red October

Red October

We see thousands of similar documents in our systems every month. The Red October attacks are interesting because of the large scale of the espionage done by a single entity, and the long timespan they cover. However, the sad truth is that companies and governments are constantly under similar attacks from many different sources. In that sense, this really is just everyday life on the Internet.

The currently known exploit documents used by the Red October attacks are detected by F-Secure antivirus with various detection names, including Exploit:Java/Majava.A, Exploit.CVE-2012-0158.Gen, Exploit.CVE-2010-3333.Gen, and Exploit.CVE-2009-3129.Gen.

P.S. If you are wondering what you should do as a system administrator to prevent such attacks against your environment, we'll soon have a follow-up post by Senior Researcher Jarno Niemela for you.

 
 

 
 
Monday, January 14, 2013

 
Java & IE Patches + Prompts Posted by Sean @ 17:38 GMT

Microsoft is releasing an out-of-cycle security update for users of Internet Explorer 6-8.

Advisory_2704220

According to Microsoft: "While we have still seen only a limited number of customers affected by the issue, the potential exists that more customers could be affected in the future."

Potential indeed — there's now evidence of this IE vulnerability being incorporated into popular exploit kits such as Blackhole. Be sure to update as soon as possible.

Java: something you should have already updated (if you still use it at all).

Here's what the CVE-2013-0422 Java (JRE) exploit looked like among our top detections last week.

java0daystats

As you can see, the exploit grew in prevalence, but remains in the middle of the pack. That is because not everybody is running the latest version of Java (7u11), and exploit kits do version checking. Thus, we still see more exploits for older versions of Java. So it's important to update to the current version!
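
If you want to quickly check which JRE a machine is running, here is a minimal, hedged Python sketch that parses the "java -version" banner and warns if the version is older than 7u11; it assumes the usual '1.7.0_11'-style version string.

# Minimal sketch: parse the "java -version" banner (printed to stderr) and
# warn if the installed JRE is older than 7 update 11.
import re
import subprocess

try:
    banner = subprocess.run(["java", "-version"],
                            capture_output=True, text=True).stderr
except FileNotFoundError:
    banner = ""

match = re.search(r'version "1\.(\d+)\.0_(\d+)"', banner)
if not match:
    print("No Java found, or unrecognized version banner.")
else:
    major, update = int(match.group(1)), int(match.group(2))
    if (major, update) < (7, 11):
        print(f"Java {major}u{update} is outdated; update to 7u11 or later.")
    else:
        print(f"Java {major}u{update} looks current.")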

Additionally, from Oracle: "The fixes in this Alert include a change to the default Java Security Level setting from "Medium" to "High". With the "High" setting, the user is always prompted before any unsigned Java applet or Java Web Start application is run."

Here's what the prompt looks like:

Java_7u11_prompt_unsigned

Here's the prompt of a self-signed app:

Java_7u11_prompt_signed

 
 

 
 
Obit: Aaron Swartz Posted by Sean @ 14:57 GMT

Unfortunately, I didn't know much about Aaron Swartz until after his death.

Of the things that I have now read about him, I think The Economist says it best: Remembering Aaron Swartz: Commons man

 
 

 
 
Friday, January 11, 2013

 
The Forrester Wave + Software Updater Posted by Sean @ 15:23 GMT

Our Corporate Security Business team has been making a lot of smart decisions lately (or it seems so to us anyway). And that's reflected in this quarter's The Forrester Wave. Congrats guys!

Pekka Usva

Another decision we're rather pleased to see from our corporate folks is the implementation of a "software updater" feature in our business products. Because, as anybody who regularly follows this blog knows, out-of-date software is vulnerable software. Something that helps an admin keep things up to date is a good feature.

We also enjoy the marketing allegory: If your software gets old, it'll spoil.

fridge

Who hasn't stared at the contents of their refrigerator pondering what food was about to go bad?

Anyway, today, the majority of our top detections are for exploits which target known vulnerabilities. It's better not to have the vulnerabilities to begin with.

And speaking of vulnerabilities: Java, if you use it, update now to Java 7 update 10 or later! Then use the Security tab in the Java control panel to disable Java in the browser. There's an unpatched vulnerability that's being used by popular exploit kits.

Our antivirus detects the exploit as Exploit:Java/Majava.C.

 
 

 
 
Thursday, January 10, 2013

 
On the Topic of AV Being Useless Posted by SecResponse @ 13:31 GMT

I have lately been following and participating in discussions as to whether or not antivirus products are useless and just a waste of money. As I am employed by F-Secure, my position on the matter may be rather obvious. But rather than going on with the same tired argument, I would like to shine some light on common patterns and misconceptions that repeat themselves in almost all of these discussions.

Pattern 1: Someone tries to use VirusTotal scan results as an argument.

VirusTotal is a very useful system for getting initial information about a particular sample, but it does not give a reliable indication of the performance of various antivirus products. The folks at VirusTotal know this themselves, and they do not like their system being abused in bad research. In fact, VT has stated this for years on its About page; see the section called "BAD IDEA: VirusTotal for antivirus/URL scanner testing".

From VT: "At VirusTotal we are tired of repeating that the service was not designed as a tool to perform antivirus comparative analyses, but as a tool that checks suspicious samples with several antivirus solutions and helps antivirus labs by forwarding them the malware they fail to detect. Those who use VirusTotal to perform antivirus comparative analyses should know that they are making many implicit errors in their methodology, the most obvious being…" (Emphasis mine.)

The reason for this is threefold. Firstly, the engines that AV vendors provide to VT are not configured exactly as they are in the real-world products and do not receive the same care and attention as the real products do; if a sample is missed in VT's results, we do not care as much as we do for our paying customers.

Secondly, no organization in its right mind would feed its most advanced technology into a comparative system where attackers can test their new creations at leisure, trying until they are able to circumvent enough scanners for their liking.

Thirdly, VirusTotal does not try to execute the files with the actual products installed. This means that any run-time heuristics, behavioral monitoring, and memory scanning are out of the game, and thus the detection results are meager when compared to full products. It is understandable why VT does not execute files: executing everything on every engine would require massive resources, and many samples would still fail due to missing components that would be present in a real infection case.

Pattern 2: Testers locally scan files that they have downloaded from some collection and unpacked (from password-protected archives), and complain if some malware file is not detected.

Even when using the real product to scan such collections or forensic result files, you are still not really using the product as it is intended; scanning is only the third-to-last line of defense. The antivirus industry realized years ago that there is no way it can give sufficient protection just by scanning files, so we switched our focus to preventing hostile content from ever reaching the target rather than trying to detect it when it is already running on the system.

The typical antivirus product, or should I say security suite, contains multiple layers of defense, of which file scanning is only a small part. What is used varies from product to product, but a typical product has at least these layers.

1. URL/Web access filtering.

This is done to prevent users from ever coming into contact with hostile attack sites.

2. HTTP, et cetera protocol scanning.

To catch the hostile content before it reaches Web browser or other client.

3. Exploit detection.

To block the exploit before it is able to take over the client. And if the exploit is not detected as such, many products also contain measures to prevent exploits from successfully running.

4. Network ("cloud") reputation queries.

To query file or file pattern reputation from back end servers. This is the part that many people have argued should replace traditional antivirus. But actually we are already doing that as one tool in our arsenal. So it didn't replace, but rather, enhanced existing AV.

5. Sandboxing and file based heuristics.

To catch new exploits / payloads dropped before they have a chance to execute.

6. Traditional file scanning.

This is the part many folks think of when they speak of "antivirus". Protection-wise, it provides probably 15-20% of the coverage.

7. Memory scanning.

This detects malware that never lands on disk, or circumvents packers that we cannot handle with sandboxes or static unpacking.

8. Runtime heuristics and memory scanning.

Currently the last line of defense, to catch files that behave in a suspicious or malicious manner.

My apologies for not going into detail on the various technologies, but it would make this post too long to explain why every layer is needed and how each works. Anyway, the point is that the fact that some threat is not detected by a scanner doesn't mean that it wouldn't be blocked in the case of a real attack.

Real, working security is based on multiple layers of protection. What is greatly amusing to us is that people who claim AV is useless usually recommend one of the technologies listed above as the new solution. Well, we're already doing that, but since whatever they recommend is not a complete solution on its own, we also need the other layers.
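
As a toy illustration of that layering argument (nothing to do with any real product's internals), here is a minimal Python sketch in which a sample is passed through several independent checks and is blocked by the first layer that fires, even though the plain file scanner alone would have missed it. Every name and indicator in it is made up.

# Toy sketch of layered protection: each "layer" is a simple check over a
# sample; a sample is blocked by the first layer that fires.
def url_filtering(sample):
    return sample.get("url", "").endswith(".badsite.example")

def exploit_detection(sample):
    return b"%u9090" in sample.get("content", b"")  # toy shellcode-style marker

def file_scanning(sample):
    return sample.get("sha1") in {"known-bad-hash"}

LAYERS = [
    ("URL filtering", url_filtering),
    ("Exploit detection", exploit_detection),
    ("File scanning", file_scanning),
]

def inspect(sample):
    for name, check in LAYERS:
        if check(sample):
            return "blocked by " + name
    return "allowed"

# A sample the plain file scanner would miss, but the URL filter catches:
print(inspect({"url": "http://payload.badsite.example", "sha1": "unknown", "content": b""}))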

Pattern 3: Blacklisting is stupid. People should do whitelisting.

If whitelisting were a feasible option on its own, do you really think we wouldn't be utilizing it already? Actually, we do use whitelisting, but only for performance improvements and false alarm avoidance.

The problem with whitelisting is that it deals only with executable files, and thus does not prevent the system from being infected in the first place; if the attack resides only in memory, the whitelist has nothing to check against. Also, whitelisting does not work against exploit documents or websites, since you cannot build a whitelist of every clean document or piece of website content.

Pattern 4: Antivirus should be in the net, not on the desktop.

It would be very nice to be able to offload all security onto a server somewhere and never worry about AV hogging resources. Unfortunately, this is not feasible because computers are mobile.

With a static desktop that is never connected to anything other than the office network, one could theoretically do all security at the network level. But in reality most computers are laptops, which are connected from one network to another all the time, and the only thing that stays constant is what is installed on the device.

Also, pure network based AV provides no protection against USB and other media based malware.

One could of course use a pure "cloud" AV that is very light on the client, but that would only give you less protection, as you would drop some of the protection technologies and end up with a less capable product compared to the full set.

Pattern 5: But I have this massive amount of data from our servers and we see malware get through.

The reason for this is twofold.

First, what is seen is the portion of attacks that get through. We do not (or should not) claim to give 100% protection; we are only able to stop most attacks. How well each respective product does can be seen from tests that use the actual product. Nobody gets 100% protection all the time, so there will always be some attacks that get through.

Secondly, a lot of corporations hamstring their AV product by preventing network queries back to the AV vendor's servers, which means the product has to work with only local heuristics and scan engines. This means the company is giving up layers 1 and 4 of the total defense set and is thus getting much less protection than it would if it allowed all the technologies to be used.

Pattern 6: AV misses 98-100% of the malware I see.

Well, saying that AV misses 100% of the malware it misses is kind of self-explanatory.

What you see is the portion that was able to get past the defenses of whatever product you or your client is using.

And we would very much appreciate it if you would contact us and tell us everything you know about the attack. The malicious file alone does not give a complete enough picture for us to further develop our file scanning and behavioral heuristics.

Pattern 7: AV is useless against APT attacks.

Advanced persistent attacks are very difficult to block, and so far nobody has a complete answer to them; nobody ever will, as attackers adapt to whatever defenses you have. AV is one important layer against advanced attacks, but it is not enough on its own.

But then again, without AV you would have to worry about both advanced attacks and all the rest that you are currently being protected from. So how does it help to advocate not using AV and increasing your attack surface even further?

Beware people who tell you that your defenses are not perfect and thus you should get rid of them.

Regards,
@jarnomn

 
 

 
 
Wednesday, January 9, 2013

 
Versions of Internet Explorer Still Vulnerable Posted by Sean @ 10:52 GMT

One week ago, it wasn't yet clear if Microsoft would be able to quickly patch Internet Explorer's latest vulnerability.

Microsoft Security Advisory 2794220
Microsoft Security Advisory (2794220)

We now know it isn't part of January's Security Updates. This raises the possibility of an out-of-cycle patch. But then, we have yet to see more than limited exploitation. (We are currently investigating reports of targeted attacks.)

To repeat our earlier guidance:

For Windows 7, update to version 9 of Internet Explorer.

For consumers with XP, we recommend installing an additional browser such as Mozilla Firefox or Google Chrome.

For corporate folks and other organizations required to use XP with IE 8: Microsoft has a Fix it tool available.

Details here: Microsoft "Fix it" available for Internet Explorer 6, 7, and 8

 
 

 
 
Tuesday, January 8, 2013

 
Cool Exploit Kit is Related to Blackhole Posted by Sean @ 15:05 GMT

Two months ago, Karmina and Timo wrote about clear similarities between the Cool and Blackhole exploit kits. Blackhole seemed to be copying (or perhaps replicating is a better word) techniques and exploits used by Cool. It appeared the two were closely related.

Yesterday, Brian Krebs exposed that relationship.

krebsonsecurity.com

Cool is a premium exploit kit from the authors of Blackhole; it uses custom (as in zero-day) exploits and costs $10,000 a month!

Blackhole costs "only" $700 for three months, or $1,500 for a full year. Once a custom exploit used by Cool goes public, and is thus no longer a zero-day, it gets added to the less expensive Blackhole.

Run, don't walk to Krebs on Security for all the details: Crimeware Author Funds Exploit Buying Spree

 
 

 
 
Thursday, January 3, 2013

 
On Using Fake Data to Generate Alerts Posted by Sean @ 08:50 GMT

Here's a tip of Mikko's from September:


Insert unique fake users and customers into your production databases, then set up a Google Alert to notify you if they get leaked.

And here's a Washington Post article from today:

Washington Post Jan 3rd, 2013

By Ellen Nakashima: To thwart hackers, firms salting their servers with fake data

Not a bad idea, eh?

We've also known people who make intentional spelling errors when providing personal data to companies, so that they can track how the information is used and sold.
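
For the sake of illustration, here is a minimal Python sketch of the salting idea: it generates a unique fake customer record whose e-mail address you could later set a Google Alert on. The field names and domain are made up.

# Minimal sketch: generate a unique fake customer record whose e-mail address
# can later be watched for (e.g. with a Google Alert) to detect a leak.
import uuid

def make_honeytoken(domain="example.com"):
    token = uuid.uuid4().hex[:12]
    return {
        "name": "Canary User " + token[:6],
        "email": "canary." + token + "@" + domain,  # the unique string to alert on
        "note": "fake record - investigate if seen outside this database",
    }

print(make_honeytoken())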

Fake data: welcome to the future.
 
 

 
 
Wednesday, January 2, 2013

 
Fix it: Internet Explorer 8 Vulnerability Posted by Sean @ 11:12 GMT

As mentioned in our previous post, there's an Internet Explorer (zero-day) remote code execution vulnerability being exploited in the wild which affects IE 8, as well as IE 6 & 7. Those versions of IE account for about one third of all desktop browser market share.

Current exploitation is limited but it's quite likely a reliable exploit will soon find its way into crimeware exploit kits.

Microsoft Security Advisory 2794220
Microsoft Security Advisory (2794220)

IE 9 & 10 are not vulnerable — which is of small comfort to users of Windows XP as IE 9 & 10 are not supported.

For consumers with XP, we recommend installing an additional browser such as Mozilla Firefox or Google Chrome.

For corporate folks (still) required to use XP with IE 8: Microsoft has a Fix it tool available.

Microsoft Security Advisory 2794220, Fix it

You'll find more details at Microsoft's Security Research & Defense blog: Microsoft "Fix it" available for Internet Explorer 6, 7, and 8.

It's not yet clear if this vulnerability will be patched on January 8th during Microsoft's scheduled update cycle.

 
 

 
 
Targeted Zero-day Attack on CFR Site Posted by ThreatResearch @ 01:03 GMT

It looks as if some people used the day after Christmas for mischief rather than relaxation. According to a FreeBeacon report, the website for the US foreign policy group, Council on Foreign Relations (CFR), was compromised on December 26th, 2012.

Judging from the exploit HTML file apparently used in the attack, users in specific countries were being targeted, as the attacker focused their attention specifically on browsers set to use the following Windows system languages:

  •  Chinese (Taiwan)
  •  Chinese (PRC)
  •  English

The compromised site itself was reportedly cleaned shortly after the attack was detected. However, we expect the exploit to become more widely used in other online attacks now that it has been added to the Metasploit framework.

The exploit affects version 8 and lower of the Internet Explorer browser, so users running an affected version are advised to either update to version 9 or 10, or switch to another browser.

In the meantime, Microsoft has released a security advisory providing additional details and a workaround for affected users.

—————

Updated on 2 Jan 2013: minor edit to emphasize the specific languages targeted.

Post — Wayne