Archive for the Network Security Category

SIM cards are the de facto trust anchor of mobile devices worldwide. The cards protect the mobile identity of subscribers, associate devices with phone numbers, and increasingly store payment credentials, for example in NFC-enabled phones with mobile wallets.

With over seven billion cards in active use, SIMs may well be the most widely used security token in the world. Through over-the-air (OTA) updates deployed via SMS, the cards are even extensible with custom Java software. While this extensibility has rarely been used so far, its existence alone already poses a critical hacking risk.

Cracking SIM update keys. OTA commands, such as software updates, are cryptographically secured SMS messages delivered directly to the SIM. While the option exists to use state-of-the-art AES or the somewhat outdated 3DES algorithm for OTA, many (if not most) SIM cards still rely on the 1970s-era DES cipher. DES keys were shown to be crackable within days using FPGA clusters, but they can also be recovered much faster by leveraging rainbow tables similar to those that made GSM’s A5/1 cipher breakable by anyone.

To derive a DES OTA key, an attacker starts by sending a binary SMS to a target device. The SIM does not execute the improperly signed OTA command, but does in many cases respond to the attacker with an error code carrying a cryptographic signature, once again sent over binary SMS. A rainbow table resolves this plaintext-signature tuple to a 56-bit DES key within two minutes on a standard computer.
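The key-recovery step above can be sketched as a search over the cipher’s keyspace, given one known plaintext–signature pair (a rainbow table trades this computation for precomputed storage, which is what makes the two-minute lookup possible). The sketch below is illustrative only: a keyed hash stands in for the DES-based OTA signature, and the 56-bit keyspace is shrunk to 16 bits so the loop finishes instantly.

```python
import hashlib

def toy_mac(key, plaintext):
    """Stand-in for the DES-based OTA signature (illustrative only)."""
    return hashlib.sha256(key.to_bytes(7, "big") + plaintext).digest()[:8]

def recover_key(plaintext, signature, keyspace_bits=16):
    """Exhaustive search over a reduced keyspace; a rainbow table
    replaces this loop with a precomputed lookup."""
    for key in range(2 ** keyspace_bits):
        if toy_mac(key, plaintext) == signature:
            return key
    return None

# The attacker knows the error-response plaintext and observes its signature.
secret_key = 0xBEEF                 # within the toy 16-bit keyspace
plaintext = b"OTA error response"
signature = toy_mac(secret_key, plaintext)

assert recover_key(plaintext, signature) == secret_key
```

Against real DES, the same known-plaintext relationship holds; the only difference is the size of the keyspace, which rainbow tables make tractable.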

Deploying SIM malware. The cracked DES key enables an attacker to send properly signed binary SMS, which download Java applets onto the SIM. Applets are allowed to send SMS, change voicemail numbers, and query the phone location, among many other predefined functions. These capabilities alone provide plenty of potential for abuse.

In principle, the Java virtual machine should ensure that each Java applet only accesses the predefined interfaces. The Java sandbox implementations of at least two major SIM card vendors, however, are not secure: a Java applet can break out of its realm and access the rest of the card. This allows for remote cloning of possibly millions of SIM cards, including their mobile identity (IMSI, Ki) as well as payment credentials stored on the card.

Defenses. The risk of remote SIM exploitation can be mitigated on three layers:

Better SIM cards. Cards need to use state-of-the-art cryptography with sufficiently long keys, should not disclose signed plaintexts to attackers, and must implement secure Java virtual machines. While some cards already come close to this objective, the years needed to replace vulnerable legacy cards warrant supplementary defenses.

Handset SMS firewall. One additional protection layer could be anchored in handsets: Each user should be allowed to decide which sources of binary SMS to trust and which others to discard. An SMS firewall on the phone would also address other abuse scenarios including “silent SMS.”

In-network SMS filtering. Remote attackers rely on mobile networks to deliver binary SMS to and from victim phones. Such SMS should only be allowed from a few known sources, but most networks have not implemented such filtering yet. “Home routing” is furthermore needed to increase the protection coverage to customers when roaming. This would also provide long-requested protection from remote tracking.

Anyone who uses Skype has consented to the company reading everything they write. The H’s associates in Germany at heise Security have now discovered that the Microsoft subsidiary does in fact make use of this privilege in practice. Shortly after sending HTTPS URLs over the instant messaging service, those URLs receive an unannounced visit from Microsoft HQ in Redmond.

A reader informed heise Security that he had observed some unusual network traffic following a Skype instant messaging conversation. The server indicated a potential replay attack. It turned out that an IP address which traced back to Microsoft had accessed the HTTPS URLs previously transmitted over Skype. Heise Security then reproduced the events by sending two test HTTPS URLs, one containing login information and one pointing to a private cloud-based file-sharing service. A few hours after their Skype messages, they observed the following in the server log:

The access is coming from systems which clearly belong to Microsoft. source: utrace

They, too, had received visits to each of the HTTPS URLs transmitted over Skype, from an IP address registered to Microsoft in Redmond. URLs pointing to encrypted web pages frequently contain unique session data or other confidential information. HTTP URLs, by contrast, were not accessed. In visiting these pages, Microsoft made use of both the login information and the specially created URL for the private cloud-based file-sharing service.

In response to an enquiry from heise Security, Skype referred them to a passage from its data protection policy:

“Skype may use automated scanning within Instant Messages and SMS to (a) identify suspected spam and/or (b) identify URLs that have been previously flagged as spam, fraud, or phishing links.”

A spokesman for the company confirmed that it scans messages to filter out spam and phishing websites. This explanation does not appear to fit the facts, however. Spam and phishing sites are not usually found on HTTPS pages. By contrast, Skype leaves the more commonly affected HTTP URLs, which contain no ownership information, untouched. Skype also sends only HEAD requests, which merely fetch administrative information about the resource; to check a site for spam or phishing, Skype would need to examine its content.
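The heise test can be reproduced by anyone: share a unique canary URL only over the messenger, then watch your own server log for visits. A minimal log-scanning sketch follows; the log format (Apache/nginx combined style), the canary token, and the IP addresses are all assumptions for illustration.

```python
import re

# Parses "IP - - [timestamp] "METHOD PATH PROTO"" from a combined-format log line.
LOG_LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+) [^"]+"')

def canary_hits(log_lines, canary_token):
    """Return (ip, method, path) for every request touching the canary URL."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and canary_token in m.group("path"):
            hits.append((m.group("ip"), m.group("method"), m.group("path")))
    return hits

log = [
    '203.0.113.9 - - [16/May/2013:10:01:22 +0200] "GET /index.html HTTP/1.1" 200 512',
    '65.52.100.214 - - [16/May/2013:12:39:33 +0200] "HEAD /login?canary=8f3a HTTP/1.1" 200 -',
]
print(canary_hits(log, "canary=8f3a"))
# → [('65.52.100.214', 'HEAD', '/login?canary=8f3a')]
```

A HEAD-only hit on a URL never published anywhere else is exactly the pattern heise Security observed.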

Back in January, civil rights groups sent an open letter to Microsoft questioning the security of Skype communication since the takeover. The groups behind the letter, which included the Electronic Frontier Foundation and Reporters Without Borders, expressed concern that the restructuring resulting from the takeover meant that Skype would have to comply with US laws on eavesdropping and would therefore have to permit government agencies and secret services to access Skype communications.

In summary, The H and heise Security believe that all Skype users, having consented to Microsoft using all data transmitted over the service pretty much however it likes, should assume that this will actually happen, and that the company is not going to reveal what exactly it does with this data.

LinkedIn’s mobile application has an interesting feature that allows users to view their iOS calendars within the app. However, it turns out that LinkedIn has decided to send users’ detailed calendar entries to its servers. The app doesn’t only send the participant lists of meetings; it also sends the subject, location, time of the meeting and, more importantly, personal meeting notes, which tend to contain highly sensitive information such as conference-call details and passcodes. If you have opted in to this calendar feature on your iPhone, LinkedIn will automatically receive your calendar entries and will continue doing so every time you open the LinkedIn app.

What is the problem exactly?
LinkedIn’s app collects full meeting details from one’s iOS calendar, including sensitive information such as meeting notes. While the app accessing this information locally is not a problem in itself, the information is also transmitted to LinkedIn’s servers; moreover, this is currently done without a clear indication from the app to the user, thus possibly violating Apple’s privacy guidelines (section 17.1: “Apps cannot transmit data about a user without obtaining the user’s prior permission and providing the user with access to information about how and where the data will be used”). The biggest problem is that most of the transmitted information is not required for the app’s functionality, as described below.

What information is being collected and sent to LinkedIn’s servers?
Every time you launch LinkedIn’s app for iPhone, it automatically sends out all of your calendar entries for a five-day time frame. The meeting information is collected from all the calendars on the iOS device, thus possibly exposing information from both personal and corporate calendar accounts.
Calendar meetings that are sent out contain: meeting title, organizer and attendees, location, time and meeting notes. It should be noted that the names and email addresses of the meeting organizer and attendees are collected even for those who do not have a LinkedIn account.

As an example, creating a financial results meeting in your private calendar leads to data leakage to LinkedIn during your normal usage of the LinkedIn app. Below you can find the actual leaked data, which was acquired by analyzing the traffic the app generates.

Calendar details traffic leakage

Is the collected information needed for LinkedIn’s functionality?
Not that we are aware of. In order to implement its acclaimed feature of matching the people you meet with their LinkedIn profiles, all LinkedIn needs are unique identifiers of the people you are going to meet, not all the details of your planned meetings; details such as the meeting schedule, location, title or notes, which tend to be sensitive, particularly for organizations, are irrelevant for this task.

What should LinkedIn do?
In order to achieve its desired functionality, the LinkedIn app should refrain from sending full meeting details to its servers. Instead, the app should communicate only a small relevant subset, such as the attendees of the meeting. As a matter of fact, users’ privacy can be further improved by sending over hashed versions of the contact data instead of the raw data, thus preserving a better privacy model.
In addition, we believe the app should clearly communicate to its users the kind of information it sends back to LinkedIn’s servers.
We have communicated the above to LinkedIn, and understand it is being examined by their Risk and Privacy Operations team. However, at the time of writing, the issue is still not fixed.
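The hashed-contacts suggestion above can be sketched as follows. Only salted hashes of attendee e-mail addresses leave the device; the server can still match them against its own (identically hashed) member database, while titles and notes never leave the phone. All names and the salt are illustrative assumptions, not LinkedIn’s actual protocol.

```python
import hashlib

APP_SALT = b"illustrative-static-salt"  # in practice chosen by the service

def hash_attendee(email):
    """Normalize an address, then hash it so only a matchable token is uploaded."""
    normalized = email.strip().lower().encode("utf-8")
    return hashlib.sha256(APP_SALT + normalized).hexdigest()

meeting = {
    "title": "Q3 financial results",           # stays on the device
    "notes": "Dial-in 555-0100, passcode 42",  # stays on the device
    "attendees": ["alice@example.com", "Bob@Example.com "],
}

# Only the hashed attendee list is sent to the server.
payload = {"attendees": [hash_attendee(a) for a in meeting["attendees"]]}
print(payload)
```

A static salt still allows precomputed dictionary matching of e-mail addresses, so this is a privacy improvement rather than a complete fix; it simply illustrates that profile matching does not require the raw meeting record.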

What should Apple do?
On a more strategic level, there may be additional mobile applications that extract sensitive calendar details and then transmit them out of the device. At the moment, such applications may be able to do so without a clear indication to the user, thus possibly violating Apple’s privacy guidelines. Therefore, we believe Apple could improve their screening process by leveraging static analysis technologies to detect such violations and better enforce its privacy policy on submitted applications.

Have LinkedIn used the acquired information in a bad way? Did LinkedIn have bad intentions?
To the best of our knowledge, and based on LinkedIn’s good reputation and market leadership, we do not believe it utilized the collected information in a malicious way. However, we are concerned that it collects and sends out sensitive information about its users without clear indication or consent.

How can I verify I’m not affected by this privacy leak?

The following instructions cover the actions needed to verify that your calendar information is not being transmitted to LinkedIn’s servers. These instructions apply to the latest iPhone version of the LinkedIn app; similar actions apply to the iPad version.
1. Click on the LinkedIn icon in the upper left part of the screen
2. Click on the “You” view
3. Click on the settings icon in the upper right part of the screen
4. Click on the “Add Calendar” option in the Settings page
5. Toggle off the “Add Your Calendar” option.

Flame is a sophisticated attack toolkit, which is a lot more complex than Duqu. It is a backdoor, a Trojan, and it has worm-like features, allowing it to replicate within a local network and on removable media if commanded to do so by its master.

The initial point of entry of Flame is unknown – we suspect it is deployed through targeted attacks; however, we haven’t seen the original vector of how it spreads. We have some suspicions about possible use of the MS10-033 vulnerability, but we cannot confirm this now.

Once a system is infected, Flame begins a complex set of operations, including sniffing the network traffic, taking screenshots, recording audio conversations, intercepting the keyboard, and so on. All this data is available to the operators through the link to Flame’s command-and-control servers.

Later, the operators can choose to upload further modules, which expand Flame’s functionality. There are about 20 modules in total and the purpose of most of them is still being investigated.

How sophisticated is Flame?

First of all, Flame is a huge package of modules comprising almost 20 MB in size when fully deployed. Because of this, it is an extremely difficult piece of malware to analyze. The reason why Flame is so big is because it includes many different libraries, such as for compression (zlib, libbz2, ppmd) and database manipulation (sqlite3), together with a LUA virtual machine.

LUA is a scripting (programming) language which can very easily be extended and interfaced with C code. Many parts of Flame have high-level logic written in LUA – with effective attack subroutines and libraries compiled from C++.

The effective LUA code part is rather small compared to the overall code. Our estimation of development ‘cost’ in LUA is over 3000 lines of code, which for an average developer should take about a month to create and debug.

Also, there are internally used local databases with nested SQL queries, multiple methods of encryption, various compression algorithms, usage of Windows Management Instrumentation scripting, batch scripting and more.

Running and debugging the malware is also not trivial as it’s not a conventional executable application, but several DLL files that are loaded on system boot.

Overall, we can say Flame is one of the most complex threats ever discovered.

How is this different to or more sophisticated than any other backdoor Trojan? Does it do specific things that are new?

First of all, usage of LUA in malware is uncommon. The same goes for the rather large size of this attack toolkit. Generally, modern malware is small and written in really compact programming languages, which make it easy to hide. The practice of concealment through large amounts of code is one of the specific new features in Flame.

The recording of audio data from the internal microphone is also rather new. Of course, other malware exists which can record audio, but key here is Flame’s completeness – the ability to steal data in so many different ways.

Another curious feature of Flame is its use of Bluetooth devices. When Bluetooth is available and the corresponding option is turned on in the configuration block, it collects information about discoverable devices near the infected machine. Depending on the configuration, it can also turn the infected machine into a beacon, making it discoverable via Bluetooth and providing general information about the malware status encoded in the device information.

What are the notable info-stealing features of Flame?

Although we are still analyzing the different modules, Flame appears to be able to record audio via the microphone, if one is present. It stores recorded audio in compressed format, which it does through the use of a public-source library.

Recorded data is sent to the C&C through a covert SSL channel, on a regular schedule. We are still analyzing this; more information will be available on our website soon.

The malware has the ability to regularly take screenshots; what’s more, it takes screenshots when certain “interesting” applications are run, for instance, instant messaging clients. Screenshots are stored in compressed format and are regularly sent to the C&C server – just like the audio recordings.

We are still analyzing this component and will post more information when it becomes available.

When was Flame created?

The creators of Flame deliberately altered the creation dates of the files so that investigators could not establish the true time of creation. The files are dated 1992, 1994, 1995 and so on, but it’s clear that these are false dates.

We believe that, in the main, the Flame project was created no earlier than 2010 and is still undergoing active development. Its creators are constantly introducing changes into different modules while continuing to use the same architecture and file names. A number of modules were either created or changed in 2011 and 2012.

According to our own data, we first saw Flame in use in August 2010. What’s more, based on collateral data, we can be sure that Flame was out in the wild as early as February or March 2010. It’s possible that earlier versions existed before then; we don’t have data to confirm this, but the likelihood is extremely high.

Why is it called Flame? What is the origin of its name?

The Flame malware is a large attack toolkit made up of multiple modules. One of the main modules was named Flame – it’s the module responsible for attacking and infecting additional machines.

Who is responsible?

There is no information in the code or otherwise that can tie Flame to any specific nation state. So, just like with Stuxnet and Duqu, its authors remain unknown.

Why are they doing it?

To systematically collect information on the operations of certain nation states in the Middle East, including Iran, Lebanon, Syria, Israel and so on. Here’s a map of the top 7 affected countries:

Is Flame targeted at specific organizations, with the goal of collecting specific information that could be used for future attacks? What type of data and information are the attackers looking for?

From the initial analysis, it looks like the creators of Flame are simply looking for any kind of intelligence – e-mails, documents, messages, discussions inside sensitive locations, pretty much everything. We have not seen any specific signs indicating a particular target such as the energy industry – making us believe it’s a complete attack toolkit designed for general cyber-espionage purposes.

Of course, like we have seen in the past, such highly flexible malware can be used to deploy specific attack modules, which can target SCADA devices, ICS, critical infrastructure and so on.

Was this made by the Duqu/Stuxnet group? Does it share similar source code or have other things in common?

In size, Flame is about 20 times larger than Stuxnet, comprising many different attack and cyber-espionage features. Flame has no major similarities with Stuxnet/Duqu.

What is Wiper and does it have any relation to Flame? How is it destructive and was it located in the same countries?

The Wiper malware, which was reported on by several media outlets, remains unknown. While Flame was discovered during the investigation of a number of Wiper attacks, there is no information currently that ties Flame to the Wiper attacks. Of course, given the complexity of Flame, a data wiping plugin could easily be deployed at any time; however, we haven’t seen any evidence of this so far.

Additionally, systems which have been affected by the Wiper malware are completely unrecoverable – the extent of damage is so high that absolutely nothing remains that can be used to trace the attack.

There is information about Wiper incidents only in Iran. Flame was found by us in different countries of the region, not only Iran.

Functionality/Feature Questions about the Flame Malware

What are the ways it infects computers? USB Sticks? Was it exploiting vulnerabilities other than the print-spooler to bypass detection? Any 0-Days?

Flame appears to have two modules designed for infecting USB sticks, called “Autorun Infector” and “Euphoria”. We haven’t seen them in action yet, perhaps because this functionality appears to be disabled in the configuration data. Nevertheless, the ability to infect USB sticks exists in the code, using two methods:

Autorun Infector: the “Autorun.inf” method from early Stuxnet, using the “shell32.dll” “trick”. What’s key here is that the specific method was used only in Stuxnet and was not found in any other malware since.

Euphoria: spreads via media using a “junction point” directory that contains malware modules and an LNK file that triggers the infection when the directory is opened. Our samples contained the names of the files but not the LNK file itself.

In addition to these, Flame has the ability to replicate through local networks. It does so using the following:

The printer vulnerability MS10-061 exploited by Stuxnet – using a special MOF file, executed on the attacked system using WMI.

Remote job tasks.

When Flame is executed by a user who has administrative rights to the domain controller, it is also able to attack other machines in the network: it creates backdoor user accounts with a pre-defined password that is then used to copy itself to these machines.

At the moment, we haven’t seen use of any 0-days; however, the worm is known to have infected fully-patched Windows 7 systems through the network, which might indicate the presence of a high risk 0-day.

Can it self-replicate like Stuxnet, or is it done in a more controlled form of spreading, similar to Duqu?

The replication part appears to be operator commanded, like Duqu, and also controlled with the bot configuration file. Most infection routines have counters of executed attacks and are limited to a specific number of allowed attacks.

Why is the program several MBs of code? What functionality does it have that could make it so much larger than Stuxnet? How come it wasn’t detected if it was that big?

The large size of the malware is precisely why it wasn’t discovered for so long. In general, today’s malware is small and focused. It’s easier to hide a small file than a larger module. Additionally, over unreliable networks, downloading 100K has a much higher chance of being successful than downloading 6MB.

Flame’s modules together account for over 20MB. Much of this consists of libraries designed to handle SSL traffic, SSH connections, sniffing, attack, interception of communications and so on. Consider this: it took us several months to analyze the 500K of Stuxnet code. It will probably take a year to fully understand the 20MB of Flame code.

Does Flame have a built-in Time-of-Death like Duqu or Stuxnet ?

There are many different timers built into Flame. They monitor the success of connections to the C&C, the frequency of certain data stealing operations, the number of successful attacks and so on. Although there is no suicide timer in the malware, the controllers have the ability to send a specific malware removal module (named “browse32”), which completely uninstalls the malware from a system, removing every single trace of its presence.

<Taken from Kaspersky, The Flame: Q&As>

Update 1 (28-May-2012):

According to Kaspersky’s analysis, the Flame malware is the same as “SkyWiper”, described by the CrySyS Lab, and by Iran’s Maher CERT group, where it is called “Flamer”.

Researchers have discovered a serious weakness in virtually all websites protected by the secure sockets layer protocol that allows attackers to silently decrypt data that’s passing between a webserver and an end-user browser.

The vulnerability resides in versions 1.0 and earlier of TLS, or transport layer security, the successor to the secure sockets layer technology that serves as the internet’s foundation of trust. Although versions 1.1 and 1.2 of TLS aren’t susceptible, they remain almost entirely unsupported in browsers and websites alike, making encrypted transactions on PayPal, GMail, and just about every other website vulnerable to eavesdropping by hackers who are able to control the connection between the end user and the website he’s visiting.

At the Ekoparty security conference in Buenos Aires later this week, researchers Thai Duong and Juliano Rizzo plan to demonstrate proof-of-concept code called BEAST, which is short for Browser Exploit Against SSL/TLS. The stealthy piece of JavaScript works with a network sniffer to decrypt encrypted cookies a targeted website uses to grant access to restricted user accounts. The exploit works even against sites that use HSTS, or HTTP Strict Transport Security, which prevents certain pages from loading unless they’re protected by SSL.

The demo will decrypt an authentication cookie used to access a PayPal account, Duong said. Two days after this article was first published, Google released a developer version of its Chrome browser designed to thwart the attack. Update here.
Like a cryptographic Trojan horse

The attack is the latest to expose serious fractures in the system that virtually all online entities use to protect data from being intercepted over insecure networks and to prove their website is authentic rather than an easily counterfeited impostor. Over the past few years, Moxie Marlinspike and other researchers have documented ways of obtaining digital certificates that trick the system into validating sites that can’t be trusted.

Earlier this month, attackers obtained digital credentials for Google.com and at least a dozen other sites after breaching the security of disgraced certificate authority DigiNotar. The forgeries were then used to spy on people in Iran accessing protected GMail servers.

By contrast, Duong and Rizzo say they’ve figured out a way to defeat SSL by breaking the underlying encryption it uses to prevent sensitive data from being read by people eavesdropping on an address protected by the HTTPS prefix.

“BEAST is different than most published attacks against HTTPS,” Duong wrote in an email. “While other attacks focus on the authenticity property of SSL, BEAST attacks the confidentiality of the protocol. As far as we know, BEAST implements the first attack that actually decrypts HTTPS requests.”

Duong and Rizzo are the same researchers who last year released a point-and-click tool that exposes encrypted data and executes arbitrary code on websites that use a widely used development framework. The underlying “cryptographic padding oracle” exploited in that attack isn’t an issue in their current research.

Instead, BEAST carries out what’s known as a plaintext-recovery attack that exploits a vulnerability in TLS that has long been regarded as mainly a theoretical weakness. During the encryption process, the protocol scrambles block after block of data using the previous encrypted block. It has long been theorized that attackers can manipulate the process to make educated guesses about the contents of the plaintext blocks.

If the attacker’s guess is correct, the block cipher will receive the same input for a new block as for an old block, producing an identical ciphertext.
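The chaining property can be demonstrated with a toy CBC construction. In TLS 1.0 the IV of each record is the last ciphertext block of the previous record, which an eavesdropper has already seen; an attacker who can inject a crafted block can therefore test a guess against an observed ciphertext block. In the sketch below a keyed hash stands in for the real block cipher (encryption direction only, which is all the equality check needs); key, cookie value, and block size are illustrative.

```python
import hashlib

BS = 16  # block size in bytes

def block_encrypt(key, block):
    """Toy stand-in for a block cipher (not invertible; equality checks only)."""
    return hashlib.sha256(key + block).digest()[:BS]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key, iv, blocks):
    out, prev = [], iv
    for p in blocks:
        c = block_encrypt(key, xor(p, prev))
        out.append(c)
        prev = c
    return out

key = b"server-side-key!"
iv = b"A" * BS
secret = b"cookie=sessionXY"   # the plaintext block the attacker wants to confirm

ct = cbc_encrypt(key, iv, [secret])
target = ct[0]

# TLS 1.0: the IV of the next record is the last ciphertext block of the
# previous one -- already observed on the wire by the attacker.
next_iv = ct[-1]

for guess in (b"cookie=sessionAB", b"cookie=sessionXY"):
    # Craft a block so the cipher input equals (guess XOR original IV);
    # the new ciphertext matches the target iff the guess was right.
    crafted = xor(xor(guess, iv), next_iv)
    c = cbc_encrypt(key, next_iv, [crafted])[0]
    print(guess, c == target)
# → b'cookie=sessionAB' False
#   b'cookie=sessionXY' True
```

BEAST combines this one-block confirmation with a byte-at-a-time boundary trick so that each guess only has 256 candidates instead of a whole block’s worth.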

At the moment, BEAST requires about two seconds to decrypt each byte of an encrypted cookie. That means authentication cookies 1,000 to 2,000 characters long will still take a minimum of half an hour for the PayPal attack to work. Nonetheless, the technique poses a threat to millions of websites that use earlier versions of TLS, particularly in light of Duong and Rizzo’s claim that this time can be drastically shortened.

In an email sent shortly after this article was published, Rizzo said refinements made over the past few days have reduced the time required to under 10 minutes.

“BEAST is like a cryptographic Trojan horse – an attacker slips a bit of JavaScript into your browser, and the JavaScript collaborates with a network sniffer to undermine your HTTPS connection,” Trevor Perrin, an independent security researcher, wrote in an email. “If the attack works as quickly and widely as they claim it’s a legitimate threat.”

The source code of the notorious SpyEye toolkit has been leaked, fueling speculation that one of the largest criminal malware families could become an even bigger threat.

SpyEye, which surfaced in late 2009 and immediately started to compete against users of the Zeus banking malware toolkits, targets account credentials and other sensitive data. Leaking the SpyEye source code gives security researchers valuable information about the malware and the techniques of the code writers, but it also opens the door for other cybercriminals to create new variants and attack techniques.

It’s anyone’s guess how cybercriminals will respond to the leaked SpyEye code. Since the source code of the Zeus attack toolkit was leaked in March, researchers at Damballa Inc. have been tracking dozens of new Zeus bot operators, said Sean Bodmer, a senior threat intelligence analyst at Damballa. In addition, researchers have discovered merged code, showing malware variants with SpyEye and Zeus characteristics.

“Now that SpyEye has been ousted, it is only a matter of time before this becomes a much larger malware threat than any we have seen to date,” Bodmer wrote in the company blog. “For the next few months, please hold onto your seats people… this ride is about to get very interesting.”

The source of the leaked SpyEye code was a French researcher with a penchant for leaking information that illustrates coding techniques. Bodmer described the leak as a blow to the underground criminal ecosystem. SpyEye was bought and sold on the black market for as much as $10,000. Users of the toolkit could only use it on one machine, but could also subscribe to software updates that kept their attacks current.

Accompanying the source code is a tutorial, making it easier for anyone to use the toolkit. The crack eliminates attribution, making it more difficult for researchers to use an operator’s name to trace new malware variants to the command-and-control infrastructure, Bodmer said. Most toolkits embed a handle within the malware agent. Damballa has already identified new SpyEye toolkits in use that have an eliminated attribution field.

“In less than 12 hours … cybercriminals are utilizing the silver platter they have been handed,” he said.

Bodmer added that the tutorial allowed him to remove any attribution to the SpyEye builder itself in less than 15 minutes.

The authors of the SpyEye toolkit have been in a battle with researchers. In March, the owners of SpyEye directed their toolkits to target a white-hat website using a distributed denial-of-service (DDoS) plug-in. The targeted website, abuse.ch, provides free feeds of known Zeus and SpyEye command-and-control servers and IP addresses. The lists are used in blacklists to deny communication to those malicious IP addresses and cripple the bots.

A number of security vendors have documented an increase in SpyEye activity in the last six months. It’s estimated that 60% of the SpyEye bots are targeting banks in the United States and 53% are targeting U.K. financial institutions, according to a recent report issued by security vendor Trusteer Inc.

TDSS

The malware detected by Kaspersky Anti-Virus as TDSS is the most sophisticated threat today. TDSS uses a range of methods to evade signature, heuristic, and proactive detection, and uses encryption to facilitate communication between its bots and the botnet command and control center. TDSS also has a powerful rootkit component, which allows it to conceal the presence of any other types of malware in the system.

Its creator calls this program TDL. Since it first appeared in 2008, malware writers have been perfecting their creation little by little. By 2010, the latest version was TDL-3, which was discussed in depth in an article published in August 2010.

The creators of TDSS did not sell their program until the end of 2010. In December, when analyzing a TDSS sample, we discovered something odd: a TDL-3 encrypted disk contained modules of another malicious program, SHIZ.

TDL-3 encrypted disk with SHIZ modules

At that time, a new affiliate program specializing in search engine redirects had just emerged on the Internet; it belonged to the creators of SHIZ, but used TDL-3.

The changes that had been made to the TDL-3 configuration and the emergence of a new affiliate marketing program point to the sale of TDL-3 source code to cybercriminals who had previously been engaged in the development of SHIZ malware.

Why did the creators of TDL decide to sell source code of the third version of their program? The fact is that by this time, TDL-4 had already come out. The cybercriminals most likely considered the changes in version 4 to be significant enough that they wouldn’t have to worry about competition from those who bought TDL-3.

In late 2010, Vyacheslav Rusakov wrote a piece on the latest version of the TDSS rootkit focusing on how it works within the operating system. This article will take a closer look at how TDL-4 communicates with the network and uploads data to the botnet, which numbered over 4.5 million infected computers at the time of writing.

Yet another affiliate program

The way in which the new version of TDL works hasn’t changed so much as how it is spread – via affiliates. As before, affiliate programs offer a TDL distribution client that checks the version of the operating system on a victim machine and then downloads TDL-4 to the computer.

Affiliates spreading TDL

Affiliates receive between $20 and $200 for every 1,000 installations of TDL, depending on the location of the victim computer. Affiliates can use any installation method they choose. Most often, TDL is planted on adult content sites, bootleg websites, and video and file storage services.

The changes in TDL-4 affected practically all components of the malware and its activity on the web to some extent or other. The malware writers extended the program functionality, changed the algorithm used to encrypt the communication protocol between bots and the botnet command and control servers, and attempted to ensure they had access to infected computers even in cases where the botnet control centers are shut down. The owners of TDL are essentially trying to create an ‘indestructible’ botnet that is protected against attacks, competitors, and antivirus companies.

The ‘indestructible’ botnet

Encrypted network connections

One of the key changes in TDL-4 compared to previous versions is an updated algorithm encrypting the protocol used for communication between infected computers and botnet command and control servers. The cybercriminals replaced RC4 with their own encryption algorithm based on XOR swap operations. The domain names to which connections are made and the bsh parameter from the cfg.ini file are used as encryption keys.
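The article gives only the outline of the replacement cipher (XOR swaps, keyed by the C&C domain name and the bsh value), so the following Python sketch illustrates that general construction rather than reconstructing TDL-4’s actual code; the key-schedule and all function names are assumptions.

```python
import hashlib

def build_swap_table(domain: str, bsh: str) -> list:
    """Derive a 256-entry byte-substitution table from the two key strings.

    Hypothetical construction: shuffle a permutation of 0..255 with swaps
    driven by a hash of the concatenated keys (an RC4-KSA-like schedule).
    """
    seed = hashlib.sha256((domain + bsh).encode()).digest()
    table = list(range(256))
    j = 0
    for i in range(256):
        j = (j + table[i] + seed[i % len(seed)]) % 256
        table[i], table[j] = table[j], table[i]  # swap (an XOR swap is equivalent)
    return table

def xor_encrypt(data: bytes, domain: str, bsh: str) -> bytes:
    """Substitute each byte through the table, then XOR with a key byte."""
    table = build_swap_table(domain, bsh)
    key = (domain + bsh).encode()
    return bytes(table[b] ^ key[i % len(key)] for i, b in enumerate(data))

def xor_decrypt(data: bytes, domain: str, bsh: str) -> bytes:
    """Invert the XOR, then map each byte back through the inverse table."""
    table = build_swap_table(domain, bsh)
    inverse = [0] * 256
    for i, v in enumerate(table):
        inverse[v] = i
    key = (domain + bsh).encode()
    return bytes(inverse[b ^ key[i % len(key)]] for i, b in enumerate(data))
```

The point of keying the table on the C&C domain and the per-bot bsh value is that every bot/server pair effectively speaks a different dialect, which frustrates generic traffic signatures.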

Readers may recall that one of the distinguishing features of malware from the TDSS family is a configuration file containing descriptions of the key parameters used by various modules to maintain activity logs and communications with command and control servers.

Example of configuration file content

Compared to version 3, there are only negligible changes to the format of the configuration file. The main addition is the bsh parameter, an identifier for each copy of the malware, which is provided by the command and control server the first time the bot connects. This identifier acts as one of the encryption keys for subsequent connections to the command and control server.

Part of the code modified to work with the TDL-4 protocol.

Upon protocol initialization, a swap table is created for the bot’s outgoing HTTP requests. This table is activated with two keys: the domain name of the botnet command and control server, and the bsh parameter. The source request is encrypted and then converted to base64. Random strings in base64 are prepended and appended to the received message. Once ready, the request is sent to the server using HTTPS.
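The packaging steps described above (encrypt with the domain- and bsh-keyed table, base64-encode, pad with random base64 strings, send over HTTPS) can be sketched as follows. The simple XOR stand-in, the dot delimiter, and all names here are illustrative assumptions, not TDL-4’s real wire format:

```python
import base64
import secrets
import string

B64_ALPHABET = string.ascii_letters + string.digits + "+/"

def _xor(data: bytes, key: bytes) -> bytes:
    """Stand-in for the swap-table cipher keyed by domain + bsh."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def random_b64(n: int) -> str:
    """Random base64-looking filler string of length n."""
    return "".join(secrets.choice(B64_ALPHABET) for _ in range(n))

def package_request(plaintext: bytes, domain: str, bsh: str) -> str:
    """Encrypt, base64-encode, and pad the request as the article describes."""
    key = (domain + bsh).encode()
    body = base64.b64encode(_xor(plaintext, key)).decode()
    # Random base64 strings are prepended and appended to the message; a dot
    # delimiter marks the real body here (the actual framing is unknown).
    return random_b64(8) + "." + body + "." + random_b64(8)

def unpack_request(wire: str, domain: str, bsh: str) -> bytes:
    """Server-side reversal: strip the filler, decode, decrypt."""
    body = wire.split(".")[1]
    key = (domain + bsh).encode()
    return _xor(base64.b64decode(body), key)
```

Because the padded result is just base64-like text carried inside an HTTPS request, the traffic blends in with ordinary encrypted web browsing.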

The new protocol encryption algorithm for communications between the botnet control center and infected machines ensures that the botnet will run smoothly, while protecting infected computers from network traffic analysis and blocking attempts by other cybercriminals to take control of the botnet.

An antivirus of its own

Just like Sinowal, TDL-4 is a bootkit, which means that it infects the MBR in order to launch itself, thus ensuring that malicious code will run prior to operating system start. This is a classic method used by downloaders which ensures a longer malware lifecycle and makes it less visible to most security programs.

TDL nimbly hides both itself and the malicious programs that it downloads from antivirus products. To prevent other malicious programs not associated with TDL from attracting the attention of users of the infected machine, TDL-4 can now delete them. Not all of them, of course, just the most common.

TDSS module code which searches the system registry for other malicious programs

TDSS contains code to remove approximately 20 malicious programs, including Gbot, ZeuS, Clishmic, Optima, etc. TDSS scans the registry, searches for specific file names, blacklists the addresses of the command and control centers of other botnets and prevents victim machines from contacting them.

This ‘antivirus’ actually helps TDSS; on the one hand, it fights cybercrime competition, while on the other hand it protects TDSS and associated malware against undesirable interactions that could be caused by other malware on the infected machine.

Which malicious programs does TDL-4 itself download? Since the beginning of this year, the botnet has installed nearly 30 additional malicious programs, including fake antivirus programs, adware, and the Pushdo spambot.

TDSS downloads

Notably, TDL-4 doesn’t delete itself following installation of other malware, and can at any time use the r.dll module to delete malware it has downloaded.

Command and control server statistics

Despite the steps taken by cybercriminals to protect the command and control centers, knowing the protocol TDL-4 uses to communicate with servers makes it possible to create specially crafted requests and obtain statistics on the number of infected computers. Kaspersky Lab’s analysis of the data identified three different MySQL databases located in Moldova, Lithuania, and the USA, all of which used proxy servers to support the botnet.

According to these databases, in the first three months of 2011 alone, TDL-4 infected 4,524,488 computers around the world.

Distribution of TDL-4 infected computers by country

Nearly one-third of all infected computers are in the United States. Going on the prices quoted by affiliate programs, this number of infected computers in the US is worth $250,000, a sum which presumably made its way to the creators of TDSS. Remarkably, there are no Russian users in the statistics. This may be explained by the fact that affiliate marketing programs do not offer payment for infecting computers located in Russia.
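The $250,000 figure follows directly from the affiliate rates quoted earlier; a quick back-of-the-envelope check (the exact US share and the per-install rate are assumptions read off the article’s round numbers):

```python
total_infections = 4_524_488   # Q1 2011 count from the C&C databases
us_share = 0.28                # "nearly one-third" of all infected machines
rate_per_1000 = 200            # top of the $20-$200 range; US installs
                               # typically command the highest rate

us_installs = total_infections * us_share
payout = us_installs / 1000 * rate_per_1000
print(f"~${payout:,.0f}")      # on the order of $250,000
```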