Another widespread and disastrous SSL/TLS vulnerability has been uncovered that for over a decade left millions of users of Apple and Android devices vulnerable to man-in-the-middle attacks on encrypted traffic when they visited supposedly 'secured' websites, including the official websites of the White House, FBI and National Security Agency.

Dubbed "FREAK" (CVE-2015-0204), short for Factoring attack on RSA-EXPORT Keys, the vulnerability enables hackers or intelligence agencies to force clients to use older, weaker encryption: so-called export-grade 512-bit RSA keys.

The FREAK vulnerability, discovered by security researchers from the French Institute for Research in Computer Science and Automation (Inria) and Microsoft, resides in OpenSSL versions prior to 1.0.1k and in Apple's Secure Transport.

90s WEAK EXPORT-GRADE ENCRYPTION

Back in the 1990s, the US government regulated the export of products using "strong" encryption, and devices were loaded with weaker "export-grade" encryption before being shipped out of the country.

At that time, export-grade encryption was limited to a maximum key length of 512 bits. In 2000, with the modification of the US export laws, vendors were able to include 128-bit ciphers in their products and to distribute them all over the world.

The only problem is that support for export-grade cryptography was never removed, and now, two decades later, the FREAK vulnerability makes it significantly easier for hackers to recover a website's export-grade private key and decrypt passwords, login cookies, and other sensitive information from HTTPS connections.

HOW THE FREAK VULNERABILITY WORKS

Matthew Green, an assistant research professor at Johns Hopkins University's Information Security Institute in Maryland, summarizes the FREAK vulnerability in a blog post detailing how a hacker could perform a MitM attack:

1. In the client's Hello message, it asks for a standard 'RSA' ciphersuite.
2. The MITM attacker changes this message to ask for 'export RSA'.
3. The server responds with a 512-bit export RSA key, signed with its long-term key.
4. The client accepts this weak key due to the OpenSSL/Secure Transport bug.
5. The attacker factors the RSA modulus to recover the corresponding RSA decryption key.
6. When the client encrypts the 'pre-master secret' to the server, the attacker can now decrypt it to recover the TLS 'master secret'.
7. From here on out, the attacker sees plain text and can inject anything it wants.
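The factoring step above is what the short key length makes feasible. As a toy illustration (this is not the real attack tooling, and the numbers are made up for the demo), here is the principle on a deliberately tiny modulus that plain trial division can factor; a real 512-bit export modulus needs far more compute, but the math is the same:

```javascript
// Why short RSA moduli are weak: given only the public key (n, e),
// factoring n yields the private exponent d.
function factor(n) {
  // Naive trial division; feasible only because n is tiny.
  if (n % 2n === 0n) return [2n, n / 2n];
  for (let p = 3n; p * p <= n; p += 2n) {
    if (n % p === 0n) return [p, n / p];
  }
  return null;
}

function modInverse(a, m) {
  // Extended Euclid: find d with (a * d) % m === 1n
  let [old_r, r] = [a % m, m];
  let [old_s, s] = [1n, 0n];
  while (r !== 0n) {
    const q = old_r / r;
    [old_r, r] = [r, old_r - q * r];
    [old_s, s] = [s, old_s - q * s];
  }
  return ((old_s % m) + m) % m;
}

// Toy RSA key: n = p * q with small primes, e = 65537.
const n = 1000003n * 999983n;
const e = 65537n;

// The "attacker" factors n from the public key alone...
const [p, q] = factor(n);
// ...and recomputes the private exponent d = e^-1 mod (p-1)(q-1).
const phi = (p - 1n) * (q - 1n);
const d = modInverse(e, phi);
```

With `d` in hand, the attacker can decrypt the pre-master secret exactly as the legitimate server would.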

36% SSL WEBSITES VULNERABLE TO HACKERS

A scan of more than 14 million websites that support the SSL/TLS protocols found that more than 36% of them support RSA export cipher suites (e.g., TLS_RSA_EXPORT_WITH_DES40_CBC_SHA) and are therefore vulnerable to the decryption attack.

Cracking a 512-bit key back in the '90s would have required access to the supercomputers of the time, but today it can be done in around seven hours, at a cost of roughly $100 per website.

The FREAK attack is possible when a user running a vulnerable device (currently Android smartphones, iPhones and Macs running Apple's OS X operating system) connects to a vulnerable HTTPS-protected website. At the moment, Windows and Linux end-user devices are not believed to be affected.

‘FREAK’ VULNERABILITY SIMILAR TO ‘POODLE’

The FREAK vulnerability is similar to last year's POODLE flaw (Padding Oracle On Downgraded Legacy Encryption), which allowed attackers to downgrade an entire SSL/TLS connection to the weakest possible version. FREAK affects only those SSL/TLS implementations that accept export versions of protocols using the RSA encryption algorithm.

Security researchers are maintaining a list of top vulnerable websites and encourage web server administrators to disable support for export suites, including all known insecure ciphers, and enable forward secrecy.

APPLE AND GOOGLE PLANS TO FIX FREAK

Google said an Android patch has already been distributed to partners. Meanwhile, Google is also calling on all websites to disable support for export certificates.

Apple also responded to the FREAK vulnerability and released a statement that, “We have a fix in iOS and OS X that will be available in software updates next week.”

An average of 19 vulnerabilities per day were reported in 2014, according to data from the National Vulnerability Database (NVD), which provides a comprehensive list of software security vulnerabilities. In this article, I look at some of the trends and key findings for 2014 based on the NVD's data.

Some of the questions asked are:

– What are the latest vulnerability trends? Are we seeing an increase or a decrease in the number of vulnerabilities?

– What percentage of these vulnerabilities are rated as critical? (i.e. high security impact, such as allowing remote code execution, and easy to exploit)

– In which areas do we see the most vulnerabilities? Are operating systems, third-party applications or network devices such as routers, switches, access points or printers most at risk?

– Which operating systems and applications are listed with the most vulnerabilities? This data is important because the products at the top get the most frequent security updates. To keep an IT infrastructure secure, sysadmins need to continually monitor these operating systems and applications for the latest updates and ensure they are always fully patched.

7,038 new security vulnerabilities were added to the NVD database in 2014, an average of 19 new vulnerabilities per day. The number is significantly higher than in 2013 and continues the upward trend of the past few years.

24% of these vulnerabilities are rated as high severity. The percentage is lower than in 2013, but the actual number of high-severity vulnerabilities has increased compared to 2013.

Third-party applications are the leading source of vulnerabilities, accounting for over 80% of reported vulnerabilities. Operating systems are responsible for only 13% of vulnerabilities, and hardware devices for 4%.

Top operating systems by vulnerabilities reported in 2014

It is interesting that although Microsoft operating systems still have a considerable number of vulnerabilities, they are no longer in the top 3. Apple, with OS X and iOS, is at the top, followed by the Linux kernel.

2014 was a tough year for Linux users from a security point of view, especially since some of the most important security issues of the year were reported in applications that usually run on Linux systems. Heartbleed, for example, is a critical security vulnerability in OpenSSL, while Shellshock is a vulnerability that affects GNU Bash.

Top applications by vulnerabilities reported in 2014

The applications listed here are much the same as in 2013. Not surprisingly, web browsers continue to have the most security vulnerabilities, because they are a popular gateway for accessing a server and for spreading malware to clients. Adobe's free products and Java are the main challengers, but web browsers have topped the table for the last six years: Mozilla Firefox had the most vulnerabilities reported in 2009 and 2012, Google Chrome in 2010 and 2011, and Internet Explorer for the last two years.

iPhone just snapshotted your credit card number

Do you have a page where users insert their credit card numbers? A page containing sensitive data such as personal or business information? Then you should be aware that when the user presses the iPhone's home button and your application goes into the background, iOS takes a snapshot of the current page and stores it insecurely on the device. Why? To create an "animation" when the application shrinks into the background and, when selected, expands back to your screen. If the last page contained sensitive information, that information could be stolen. Violation of the user's privacy and leakage of business information are just two of the security impacts this may cause.

This is how it's done:
1. The user launches your app and goes to a page containing sensitive information.
2. The user receives a call, or simply presses the home button, sending your app into the background.
3. iOS takes a snapshot of the last page, for the animation. This is how it looks:

Now, let's take a look at the application folder on the device. We'll go to {YOUR_APP_UUID}/Library/Caches/Snapshots/ and there we can see the file UIApplicationAutomaticSnapshotDefault-Portrait@2x.png. Opening it will reveal all the data that appeared on the last page visited in our app before it went into the background.

What can we do about it?

Well… I'm glad you asked! There are a few ways to deal with this issue. Here, I will explain four of them:

2. Mark sensitive fields as hidden

The iOS Application Programming Guide states: "When your applicationDidEnterBackground: method returns, the system takes a picture of your app's user interface and uses the resulting image for transition animations. If any views in your interface contain sensitive information, you should hide or modify those views before the applicationDidEnterBackground: method returns."

Choose a background that will be saved on top of the original snapshot. You can use the general theme of your application. For example:

4. Prevent Backgrounding

You can also prevent backgrounding completely, instead of trying to hide the sensitive data. To do so, set the "Application does not run in background" property in the application's Info.plist file. This will add the UIApplicationExitsOnSuspend key to the plist. After setting this property, every time the application tries to go into the background, applicationWillTerminate: is called instead, which prevents the snapshot from being taken at all.

Summary

Sensitive data, such as personal information, financial or business data, and more, can be saved when an app moves to the background without the user's knowledge. If a malicious application is installed on the same device, or if someone gets hold of the device, even for just a few minutes, this sensitive information could easily be stolen. The snapshot will remain there until a new snapshot of the same application is taken! This is a serious security issue and it needs to be mitigated by developers. I've seen different apps using different solutions; just pick one of the methods stated above and protect your users.

The following is a list of HTTP headers that will help you secure your web application in no time:

X-Content-Type-Options

The 'X-Content-Type-Options' HTTP header, if set to 'nosniff', stops the browser from guessing the MIME type of a file via content sniffing. Without this option set, there is a potentially increased risk of cross-site scripting.

Secure configuration: Server returns the 'X-Content-Type-Options' HTTP header set to 'nosniff'.

X-XSS-Protection

The 'X-XSS-Protection' HTTP header is used by Internet Explorer version 8 and higher. Setting this header instructs Internet Explorer to enable its built-in anti-cross-site-scripting filter. If enabled without 'mode=block', there is an increased risk that otherwise non-exploitable cross-site scripting vulnerabilities may become exploitable.

Secure configuration: Server returns the 'X-XSS-Protection' HTTP header set to '1; mode=block'.

X-Frame-Options

The 'X-Frame-Options' HTTP header indicates whether a browser should be allowed to render a page within a <frame> or <iframe>. The valid options are DENY, which prevents the page from being framed at all, and SAMEORIGIN, which allows framing only from the originating host. Without this option set, the site is at a higher risk of click-jacking unless application-level mitigations exist.

Secure configuration: Server returns the 'X-Frame-Options' HTTP header set to 'DENY' or 'SAMEORIGIN'.

Cache-Control

The ‘Cache-Control’ response header controls how pages can be cached either by proxies or the user’s browser. Using this response header can provide enhanced privacy by not caching sensitive pages in the users local cache at the potential cost of performance. To stop pages from being cached the server sets a cache control by returning the ‘Cache-Control’ HTTP header set to ‘no-store’.

Secure configuration: Either the server sets a cache control by returning the 'Cache-Control' HTTP header set to 'no-store, no-cache', or each page sets its own via the 'meta' tag for secure connections.

Updated: The above was updated after our friend Mark got in touch. Originally we had said no-store was sufficient, but as with all things web related, it appears Internet Explorer and Firefox work slightly differently (so everyone ensure you thank Mark!).

X-Content-Security-Policy

The 'X-Content-Security-Policy' response header is a powerful mechanism for controlling which sites certain content types can be loaded from. Using this response header can provide defence in depth against content injection attacks. However, it's not for the faint-hearted, in our opinion.

Secure configuration: Either the server sets a content security policy by returning the 'X-Content-Security-Policy' HTTP header, or each page sets its own via the 'meta' tag.

Strict-Transport-Security

The 'Strict-Transport-Security' HTTP header instructs the browser to connect to the site only over HTTPS for a specified period.

Note: This is a draft standard which only Firefox and Chrome support, but it is supported by sites such as PayPal. This header can only be set and honoured by web browsers over a trusted secure connection.

Secure configuration: Return the 'Strict-Transport-Security' header with an appropriate timeout over a secure connection.

Access-Control-Allow-Origin

The 'Access-Control-Allow-Origin' HTTP header is used to control which sites are allowed to bypass the same-origin policy and send cross-origin requests. This allows cross-origin access without web application developers having to write mini proxies into their apps.

Note: This is a draft standard which only Firefox and Chrome support; it is also advocated by sites such as http://enable-cors.org/.

Secure configuration: Either do not set the 'Access-Control-Allow-Origin' header at all, or return it restricted to a trusted set of sites.
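The headers above can be applied in one place. Here is a minimal sketch as a plain object suitable for Node's built-in http module; the specific values (the HSTS max-age, the trusted origin) are illustrative choices on my part, not requirements from the text:

```javascript
// Build the security headers discussed above. Pass a trusted origin,
// or omit Access-Control-Allow-Origin entirely if CORS is not needed.
function buildSecurityHeaders(trustedOrigin) {
  return {
    'X-Content-Type-Options': 'nosniff',
    'X-XSS-Protection': '1; mode=block',
    'X-Frame-Options': 'SAMEORIGIN',
    'Cache-Control': 'no-store, no-cache',
    // 180 days in seconds; only honoured over HTTPS.
    'Strict-Transport-Security': 'max-age=15552000',
    'Access-Control-Allow-Origin': trustedOrigin,
  };
}

// Usage with the built-in http module (sketch):
// const http = require('http');
// http.createServer((req, res) => {
//   res.writeHead(200, buildSecurityHeaders('https://trusted.example.com'));
//   res.end('hello');
// }).listen(8443);
```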

FireEye Research Labs identified a new Internet Explorer (IE) zero-day exploit used in targeted attacks. The vulnerability affects IE6 through IE11, but the attack targets IE9 through IE11. This zero-day bypasses both ASLR and DEP. Microsoft's acknowledgment was not long in coming: it has assigned CVE-2014-1776 to the vulnerability and released a security advisory to track the issue.

Threat actors are actively using this exploit in an ongoing campaign which FireEye has named "Operation Clandestine Fox." It is recommended to apply the patch once available.

The exploit page loads a Flash SWF file to manipulate the heap layout with the common "heap feng shui" technique. It allocates Flash vector objects to spray memory and cover address 0x18184000. Next, it allocates a vector object that contains a flash.Media.Sound() object, which it later corrupts to pivot control to its ROP chain.

• Arbitrary memory access

The SWF file calls back to Javascript in IE to trigger the IE bug and overwrite the length field of a Flash vector object in the heapspray. The SWF file loops through the heapspray to find the corrupted vector object, and uses it to again modify the length of another vector object. This other corrupted vector object is then used for subsequent memory accesses, which it then uses to bypass ASLR and DEP.

• Runtime ROP generation

With full memory control, the exploit searches for ZwProtectVirtualMemory and a stack pivot (opcodes 0x94 0xc3) in NTDLL. It also searches for SetThreadContext in kernel32, which is used to clear the debug registers. This technique, documented here, may be an attempt to bypass protections that use hardware breakpoints, such as EMET's EAF mitigation.

With the addresses of the aforementioned APIs and gadget, the SWF file constructs a ROP chain, and prepends it to its RC4 decrypted shellcode. It then replaces the vftable of a sound object with a fake one that points to the newly created ROP payload. When the sound object attempts to call into its vftable, it instead pivots control to the attacker’s ROP chain.

• ROP and Shellcode

The ROP payload basically tries to make the memory at 0x18184000 executable and to return to 0x1818411c to execute the shellcode.

18184140 8be5 mov esp,ebp
18184142 83ec2c sub esp,2Ch
18184145 90 nop
18184146 eb2c jmp 18184174
The shellcode calls SetThreadContext to clear the debug registers. It is possible that this is an attempt to bypass mitigations that use the debug registers.

Using EMET may break the exploit in your environment and prevent it from successfully controlling your computer. EMET versions 4.1 and 5.0 break (and/or detect) the exploit in our tests.
Enhanced Protected Mode in IE breaks the exploit in our tests. EPM was introduced in IE10.
Additionally, the attack will not work without Adobe Flash. Disabling the Flash plugin within IE will prevent the exploit from functioning.

My HEARTBLEEDs for you

The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet. SSL/TLS provides communication security and privacy over the Internet for applications such as web, email, instant messaging (IM) and some virtual private networks (VPNs).

The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.

The fix from OpenSSL.org was quick to arrive, and the advisory specified: "A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server. Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected, including 1.0.1f and 1.0.2-beta1."
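The missing bounds check can be modelled in a few lines. This is a toy simulation, not OpenSSL's actual code: the reply copies as many bytes as the request's length field claims, trusting it over the payload's real size, and the buffer contents here are invented for the demo:

```javascript
// Vulnerable heartbeat handler (toy model): no check that
// claimedLen <= payloadRealLen, so the slice runs past the
// payload into adjacent "process memory".
function heartbeatReply(processMemory, payloadStart, payloadRealLen, claimedLen) {
  // The fix would be: if (claimedLen > payloadRealLen) drop the message.
  return processMemory.slice(payloadStart, payloadStart + claimedLen);
}

// Hypothetical process memory: the 4-byte payload "ping" sits right
// next to secret bytes that should never leave the server.
const memory = Buffer.from('pingSECRET_PRIVATE_KEY_BYTES');
const leak = heartbeatReply(memory, 0, 4, 28);
// leak now contains "ping" followed by the adjacent secret bytes.
```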

The bug was officially assigned CVE-2014-0160. CVE (Common Vulnerabilities and Exposures) is the standard for information security vulnerability names, maintained by MITRE.

What is being leaked?

Encryption is used to protect secrets that may harm your privacy or security if they leak. In order to coordinate recovery from this bug, you can classify the compromised secrets into four categories: 1) primary key material, 2) secondary key material, 3) protected content and 4) collateral.

What is leaked primary key material and how to recover?

These are the crown jewels: the encryption keys themselves. Leaked secret keys allow the attacker to decrypt any past and future traffic to the protected services and to impersonate the service at will. Any protection given by the encryption and the signatures in the X.509 certificates can be bypassed. Recovery from this leak requires patching the vulnerability, revoking the compromised keys, and reissuing and redistributing new keys. Even doing all this will still leave any traffic intercepted by the attacker in the past vulnerable to decryption. All this has to be done by the owners of the services.

What is leaked secondary key material and how to recover?

These are, for example, the user credentials (user names and passwords) used in the vulnerable services. Recovery from this leak requires owners of the service first to restore trust in the service according to the steps described above. After this, users can start changing their passwords and possible encryption keys according to the instructions from the owners of the compromised services. All session keys and session cookies should be invalidated and considered compromised.

What is leaked protected content and how to recover?

This is the actual content handled by the vulnerable services. It may be personal or financial details, private communication such as emails or instant messages, documents, or anything seen as worth protecting by encryption. Only owners of the services will be able to estimate the likelihood of what has been leaked, and they should notify their users accordingly. The most important thing is to restore trust in the primary and secondary key material as described above. Only this enables safe use of the compromised services in the future.

What is leaked collateral and how to recover?

Leaked collateral consists of other details that have been exposed to the attacker in the leaked memory content. These may contain technical details such as memory addresses and security measures such as canaries used to protect against overflow attacks. These have only contemporary value and will lose their value to the attacker once OpenSSL has been upgraded to a fixed version.

Are YOU affected by the bug?

Some operating system distributions that have shipped with potentially vulnerable OpenSSL version:

Even though the actual code fix may appear trivial, the OpenSSL team are the experts in fixing it properly, so the latest fixed version, 1.0.1g or newer, should be used. If this is not possible, software developers can recompile OpenSSL with the heartbeat extension disabled via the compile-time option -DOPENSSL_NO_HEARTBEATS.

For more information go to heartbleed.com.

Resident XSS (Rootkits in your Web application)

HTML5 brought the internet community a whole bunch of new features to help us create a more dynamic and smooth client side, whether for mobile apps or for a browser-based client. Among its new features you can find CSS3, Drag&Drop, localStorage & sessionStorage, Streaming, Geolocation, Web Workers and whatnot.

Wait! (imagine you hear the rewind sound): local & session storage? Does that mean I can store data on the client? Yes!
Well, it sounds like a great feature, doesn't it? They all are… but are they safe? A big fat no!

But let's concentrate on the session & local storage, shall we? What if someone stores things for me? Evil things… It means that my own browser will betray me and hold malicious data that could harm me! How does that work?

Let's say that the site you are using stores some of your account details in your local & session storage, so later on it can read that information from the client, sparing it from sending requests to the server.

In this scenario, our website stores your account#, the SessionId, your username and some other string that will be displayed on the screen when you enter the website (sounds like… exactly!).

Our website takes the data it previously entered into the storage and displays it when the page renders. This is how the page looks normally, along with the original information stored in the sessionStorage:


Notice that the message within the speech bubble takes its information from the Session Storage, as can be seen in the image above, right below the page. You can access the page's Resources tab using Chrome's Developer Tools (a.k.a. Inspect Element). We can assume that the code behind it looks something like this:

Now, what if this website is vulnerable to a reflected XSS? In that case the XSS would be active once, in the response that includes it. But what if this website uses the client-side storage to display information?

If the attacker injects some malicious content into our sessionStorage and the website does not perform client-side encoding/escaping (wait… client-side? yes!), then the XSS takes residence (thus the name) inside our browser, just waiting to be called by the client.

It's also very simple to do: all the attacker needs to know is the name of the key within the storage, and then change its value using sessionStorage.setItem(key, maliciousValue);

So, let's go back to the example. The attacker managed to inject code into our sessionStorage using a regular XSS. Then it will look like this:


Notice that the Speech-of-the-day key has changed, and it now contains a script. The next time (and every time after that) the page refreshes, the XSS will be executed:


What can we do about it? As I mentioned earlier, escape content read from client-side storage before displaying it! That way, even if someone planted a malicious script, it won't be executed:

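A minimal sketch of such client-side escaping follows. The storage key name ('Speech-of-the-day') matches the example above; the function itself is a generic HTML-entity escape of my own, not the site's actual code:

```javascript
// Escape HTML metacharacters so stored content renders as inert text.
function escapeHTML(untrusted) {
  return String(untrusted)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// In the page (browser only):
// const msg = sessionStorage.getItem('Speech-of-the-day');
// bubble.innerHTML = escapeHTML(msg); // an injected <script> stays inert
```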

With all the cool features HTML5 has brought us, a lot of new security issues have been introduced. An important part of them involves data stored on the client, or taken from it. Since the client should never be trusted, all the data in it, whether we stored it or not, should be treated as untrusted, and we should take every precaution possible when dealing with it.

1. The phone posts its phone number to an HTTPS URL to request an authentication code,
2. the phone receives the authentication code in a text message,
3. the authentication code is used, again over HTTPS, to obtain a password.

These passwords are quite long and never visible to the user, making them hard to steal from a phone.

Authentication Overview

With the password, the client can log in to an XMPP-like server. For this it uses the custom SASL mechanism WAUTH-1. To log in with the phone number 1234567890, the following happens (this is only an XML representation of the protocol):

The server responds with some challenge (here ABCDEFGHIJKLMNOP): <challenge xmlns="urn:ietf:params:xml:ns:xmpp-sasl">ABCDEFGHIJKLMNOP</challenge>

To respond to the challenge, the client generates a key using PBKDF2 with the user's password, the challenge data as the salt and SHA1 as the hash function. It uses only 16 iterations of PBKDF2, which is a little low these days, but we know the password is quite long and random, so this does not concern me greatly. 20 bytes of the PBKDF2 result are used as a key for RC4, which is used to encrypt and MAC 1234567890 || ABCDEFGHIJKLMNOP || UNIX timestamp: <response xmlns="urn:ietf:params:xml:ns:xmpp-sasl">XXXXXXXXXXXX</response>

From now on, every message is encrypted and MACed (using HMAC-SHA1) using this key.

Mistake #1: The same encryption key in both directions

How is RC4 supposed to work? RC4 is a PRNG that generates a stream of bytes, which are XORed with the plaintext to be encrypted. By XORing the ciphertext with the same stream, the plaintext is recovered.

So, (A ^ X) ^ (B ^ X) = A ^ B

In other words: if we have two messages encrypted with the same RC4 key, we can cancel the key stream out!

As WhatsApp uses the same key for the incoming and the outgoing RC4 stream, ciphertext byte i of the incoming stream XORed with ciphertext byte i of the outgoing stream is equal to plaintext byte i of the incoming stream XORed with plaintext byte i of the outgoing stream. By XORing this with either of the plaintext bytes, we can uncover the other byte.

This does not directly reveal all bytes, but in many cases it will work: the first couple of messages exchanged are easy to predict from the information that is sent in plain. All messages still have a common structure, despite the binary encoding: for example, every stanza starts with 0xF8. Even if a byte is not known fully, sometimes it can be known that it must be alphanumeric or an integer in a specific range, which can give some information about the other byte.
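The cancellation can be shown with a generic stream-cipher model: any keystream XORed into two messages cancels out when the ciphertexts are XORed together. The keystream and messages here are invented for the demo (a real attacker would guess predictable protocol bytes):

```javascript
// XOR two byte buffers, truncating to the shorter one.
function xorBytes(a, b) {
  const out = Buffer.alloc(Math.min(a.length, b.length));
  for (let i = 0; i < out.length; i++) out[i] = a[i] ^ b[i];
  return out;
}

const keystream = Buffer.from('0123456789abcdef0123456789abcdef', 'hex');
const incoming = Buffer.from('hello from srv');   // plaintext we can guess
const outgoing = Buffer.from('secret message');   // plaintext we want

// Both directions "encrypted" with the SAME keystream:
const ctIn = xorBytes(incoming, keystream);
const ctOut = xorBytes(outgoing, keystream);

// The eavesdropper XORs the two ciphertexts: the keystream cancels...
const combined = xorBytes(ctIn, ctOut);
// ...so knowing (or guessing) one plaintext reveals the other.
const recovered = xorBytes(combined, incoming);
```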

Mistake #2: The same HMAC key in both directions

The purpose of a MAC is to authenticate messages. But a MAC by itself is not enough to detect all forms of tampering: an attacker could drop specific messages, swap them or even transmit them back to the sender. TLS counters this by including a sequence number in the plaintext of every message and by using a different key for the HMAC for messages from the server to the client and for messages from the client to the server. WhatsApp does not use such a sequence counter and it reuses the key used for RC4 for the HMAC.

When an attacker retransmits, swaps or drops messages, the receiver cannot notice, except for the fact that the decryption of the message is unlikely to be a valid binary-XMPP message. However, transmitting a message back to the sender at exactly the same position in the RC4 stream as it was originally transmitted will make it decrypt properly.

Conclusion

You should assume that anyone who is able to eavesdrop on your WhatsApp connection is capable of decrypting your messages, given enough effort. You should consider all your previous WhatsApp conversations compromised. There is nothing a WhatsApp user can do about this except stop using it until the developers can update it.

There are many pitfalls when developing a streaming encryption protocol. Considering they don't know how to use XOR correctly, maybe the WhatsApp developers should stop trying to do this themselves and accept a solution that has been reviewed, updated and fixed for more than 15 years, like TLS.

Android PoC

Decompiling this turned into a pile of garbage. All strings are obfuscated, so it's very hard to determine which class is doing what. It appears to use some crypto API, as there are references to certain elliptic curves and to AES. This is likely Bouncy Castle, which doesn't mean they actually use ECC or AES.

So, on to the Android emulator, where it turns out it is not following the authentication procedure described in my previous post. The mechanism is still called WAUTH-1, but the initial <auth> now contains 45 bytes of data:

After this message the server replies with an encrypted response (ABCDEFGHIJKLMNOP)

The first hint that this is still RC4 is that encrypted messages are of different lengths: a block cipher would require padding and would result in messages of a multiple of the block size.

45 bytes is also exactly the length of the data in the SASL <response>. So let's try to repeat the same trick:

The plaintext of the XXXXXXXXXXXX block is still 1234567890 || nonce || UNIX time (1234567890 was the phone number, which is still included in plain). We do not know what the nonce is in this case, but we do know 1234567890. So we could try to make a guess for the timestamp, but let's keep it simple.

As mentioned earlier, all stanzas in WhatsApp's binary encoding start with 0xF8. The last 5 bytes are 13812 in ASCII, which looks suspiciously like the 't' attribute of the <success> message (a UNIX timestamp). The 0xBE refers to the string success in WhatsApp's encoding.

This is very unlikely to be a coincidence, which proves that the latest version of the Android client is vulnerable.

But where did that RC4 key come from?

Apparently the server is able to generate the RC4 key using no nonce or key exchange with the client. The key is also different on each login, as otherwise the first couple of ciphertext bytes in the <auth> would always be the same.

We can suspect that the key only depends on the password, but a new password is obtained on every login. The client makes an HTTPS request to WhatsApp before the fake-XMPP connection is made, and yowsup-cli has a flag that will send your password to the server, which will give you a new one if the password was correct.

Conclusion

You should assume that anyone who is able to eavesdrop on your WhatsApp connection is capable of decrypting your messages, given enough effort. You should consider all your previous WhatsApp conversations compromised. There is nothing a WhatsApp user can do about this except stop using it until the developers update it.

There are many pitfalls when developing a streaming encryption protocol. Considering they don’t know how to use an XOR correctly, maybe the WhatsApp developers should stop trying to do this themselves and adopt a solution that has been reviewed, updated, and fixed for more than 15 years, such as TLS.

On Wednesday, July 24th, Google launched the Chromecast. As soon as the source code was released, GTV Hacker began their audit. Within a short period of time they had multiple items to look at for when their devices arrived. They received their Chromecasts the following day and were able to confirm that one of the bugs existed in the build the Chromecast shipped with. From that point on, they began building what you now see as their public release package.

Exploit Package:

GTV Hacker’s Chromecast exploit package modifies the system to spawn a root shell on port 23. This will allow researchers to investigate the environment more thoroughly, as well as give developers a chance to build and test software on their Chromecasts. For the normal user this release will probably be of no use; for the rest of the community, it is the first step in opening up what has until now been a mysterious stick. GTV Hacker hopes that, following this release, the community will have the tools it needs to improve on the shortfalls of this device and make better use of the hardware.

Is it really ChromeOS?

No, it’s not. GTV Hacker had a lot of internal discussion on this and concluded that it’s more Android than ChromeOS. To be specific, it’s actually a modified Google TV release, with all of the Bionic/Dalvik stripped out and replaced with a single binary for Chromecast. Since the Marvell DE3005 SoC running this is a single-core variant of the 88DE3100, most of the Google TV code was reused. So, although it’s not going to let you install an APK or anything, its origins (the bootloader, kernel, init scripts, and binaries) are all from Google TV.

How does the exploit work?

Luckily for GTV Hacker, Google was kind enough to GPL the bootloader source code for the device, so they could identify the exact flaw that allows booting an unsigned kernel. By holding down the single button while powering on the device, the Chromecast boots into USB boot mode. USB boot mode looks for a signed image at offset 0x1000 on the USB drive. When one is found, the image is passed to the internal crypto hardware to be verified, but after this process the return code is never checked! Therefore, they could execute any code at will.
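The bug class is simple to sketch. The following is a hypothetical illustration (the names `verify_image` and `usb_boot` are stand-ins; the actual code is in the GPL’d bootloader source): the verification routine’s return value is stored in `ret` but never tested before the image is booted.

```python
def verify_image(image: bytes) -> int:
    """Stand-in for the hardware crypto verification call.
    Returns 0 if the image carries a valid signature, -1 otherwise."""
    return 0 if image.startswith(b"SIGNED:") else -1

def usb_boot(image: bytes) -> str:
    ret = verify_image(image)   # BUG: 'ret' is stored but never checked
    return "booted"             # so unsigned kernels boot anyway

# An unsigned image fails verification, yet still boots:
print(verify_image(b"EVIL:payload"))   # -1
print(usb_boot(b"EVIL:payload"))       # booted
```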

The example above shows the call made to verify the image; the value stored in ret is never actually checked to ensure that the call to “VerifyImage” succeeded. From that, they were able to execute their own kernel. Amusingly, this was harder to do than their initial analysis suggested, because the USB-booted kernel needed extra modifications to allow modifying /system, along with a few other tweaks. GTV Hacker then built a custom ramdisk which, when started, modifies the system by performing the following steps:

Mount the USB drive plugged into the Chromecast.

Erase the /system partition (mtd3).

Write the new custom system image.

Reboot.

Note: /system is SquashFS, as opposed to the more commonly seen EXT4/YAFFS2.

The system image installed by the package is a copy of the original with a modified /bin/clear_crash_counter binary. The binary was modified to perform its original action and also spawn a telnet server as root.

After the above process, the only modification to the device is the one needed to spawn a root shell. No update mitigations are performed, which means that, theoretically, an update could be pushed at any moment patching the exploit. Even with that knowledge, having an internal look at the device is invaluable, and they hope the community will be able to leverage this bug in time.

With over seven billion cards in active use, SIMs may well be the most widely used security token in the world. Through over-the-air (OTA) updates deployed via SMS, the cards are even extensible through custom Java software. While this extensibility is rarely used so far, its existence already poses a critical hacking risk.

Cracking SIM update keys. OTA commands, such as software updates, are cryptographically-secured SMS messages, which are delivered directly to the SIM. While the option exists to use state-of-the-art AES or the somewhat outdated 3DES algorithm for OTA, many (if not most) SIM cards still rely on the 70s-era DES cipher. DES keys were shown to be crackable within days using FPGA clusters, but they can also be recovered much faster by leveraging rainbow tables similar to those that made GSM’s A5/1 cipher breakable by anyone.

To derive a DES OTA key, an attacker starts by sending a binary SMS to a target device. The SIM does not execute the improperly signed OTA command, but does in many cases respond to the attacker with an error code carrying a cryptographic signature, once again sent over binary SMS. A rainbow table resolves this plaintext-signature tuple to a 56-bit DES key within two minutes on a standard computer.
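The lookup step can be sketched with a toy model (hypothetical: `toy_mac` stands in for the DES-based OTA signature, and the keyspace is shrunk to a handful of keys so the table fits in memory; a real attack covers the 2^56 DES keyspace with rainbow tables rather than a full dictionary):

```python
import hashlib
import itertools

def toy_mac(key: bytes, msg: bytes) -> bytes:
    """Stand-in for the DES-based OTA signature over a message."""
    return hashlib.sha256(key + msg).digest()[:8]

# The error response signs a predictable plaintext, so the attacker
# can precompute signature -> key offline for the whole keyspace.
KNOWN_PLAINTEXT = b"ota-error-response"
table = {}
for k in itertools.product(range(4), repeat=3):   # toy 6-bit keyspace
    key = bytes(k)
    table[toy_mac(key, KNOWN_PLAINTEXT)] = key

# Online phase: send one improperly signed OTA SMS, observe the
# signed error code, and look the key up in the precomputed table.
secret_key = bytes([3, 1, 2])                     # the card's OTA key
observed_sig = toy_mac(secret_key, KNOWN_PLAINTEXT)
recovered = table[observed_sig]
print(recovered == secret_key)    # True
```

The design point is that the signature over a *known* plaintext is what makes the precomputation possible; this is why the defenses below include not disclosing signed plaintexts to attackers.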

Deploying SIM malware. The cracked DES key enables an attacker to send properly signed binary SMS, which download Java applets onto the SIM. Applets are allowed to send SMS, change voicemail numbers, and query the phone location, among many other predefined functions. These capabilities alone provide plenty of potential for abuse.

In principle, the Java virtual machine should assure that each Java applet only accesses the predefined interfaces. The Java sandbox implementations of at least two major SIM card vendors, however, are not secure: A Java applet can break out of its realm and access the rest of the card. This allows for remote cloning of possibly millions of SIM cards including their mobile identity (IMSI, Ki) as well as payment credentials stored on the card.

Defenses. The risk of remote SIM exploitation can be mitigated on three layers:

Better SIM cards. Cards need to use state-of-the-art cryptography with sufficiently long keys, should not disclose signed plaintexts to attackers, and must implement secure Java virtual machines. While some cards already come close to this objective, the years needed to replace vulnerable legacy cards warrant supplementary defenses.

Handset SMS firewall. One additional protection layer could be anchored in handsets: Each user should be allowed to decide which sources of binary SMS to trust and which others to discard. An SMS firewall on the phone would also address other abuse scenarios including “silent SMS.”

In-network SMS filtering. Remote attackers rely on mobile networks to deliver binary SMS to and from victim phones. Such SMS should only be allowed from a few known sources, but most networks have not implemented such filtering yet. “Home routing” is furthermore needed to increase the protection coverage to customers when roaming. This would also provide long-requested protection from remote tracking.