
Monday, September 7, 2015

You can insert MD5 collision blocks into many data formats, and if you do it right the result will be two objects that differ but which, when passed through MD5, have the same hash value. MD5 hashes were used as unique fingerprints in OpenSSH right up until version 6.8, released 2015-03-18. The idea behind public key fingerprints is that, as a client, you should be able to uniquely identify the public key of a server you are connecting to. Your client will show you the fingerprint of the public key and hopefully you will recognise it. To aid this you can set your ssh client to draw you a pretty piece of random art based on the hash, giving you a visual clue if something has changed.

That is an example of connecting to GitHub. You can check that fingerprint here. MD5 is no longer considered collision resistant, and in fact it is relatively simple to insert collisions into almost any binary format. This means you can create two ssh public keys that will have the same MD5 fingerprint and the same random art image. All we need to do is take a pre-generated ssh public key, cut 128 bytes out of the modulus and replace them with a pair of collision blocks. Here are a couple of 2048 bit public key files to which this technique has been applied:

To prove they are not the same, try replacing MD5 in the above command with another hash algorithm such as SHA1 or SHA256. The problem is that after messing with the moduli they almost certainly no longer have two prime factors of length ~ √n, and even if they did I would have no idea what those factors might be without trying to factor the moduli, which is hard. So effectively they now have no private key, which makes them useless. They do in fact have trivial factors, which makes them doubly useless.

Is there a way to insert a collision into a key and still maintain basic RSA functionality?

Well, it turns out this at least is partially quite simple too. Following our collision blocks we can put whatever value of the modulus we like, as long as it's the same in both keys. So imagine the modulus being made up of three parts concatenated together, A|B|C. A and B are essentially fixed, but we can adjust C until we find an n which meets the RSA requirement that n = A|B|C = p·q, where p and q are prime and of length ~ half that of n. The best way I found to do this is to generate a random C, generate a random prime p of the correct length, divide our n made up of A|B|C by p and round to the nearest integer to get q, then test whether q is prime. If it is, we can move on. It is very unlikely that A|B|C will divide exactly by p and q without remainder, but if we multiply our p and q together they will quite likely give us a new n which can be written in the form A|B|D, where D is a new number. This n still has the collision blocks B in it, and we have the two prime factors p and q. The whole process takes a few seconds to run on a normal laptop. It helps to have a larger public key, and so a longer C to mess around with, so I used 4096 bit public keys. With p and q we can reconstruct the ssh private key to match the public key we just created.
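The search for p and q can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the actual code used: the function names are made up, the prefix stands in for the fixed A|B bytes, and the sizes are toy-scale rather than 4096 bits.

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    """A random prime with the top bit set (so it has exactly `bits` bits)."""
    while True:
        c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(c):
            return c

def modulus_with_fixed_prefix(prefix, total_bits):
    """Find n = p*q whose top bits equal `prefix` (the fixed A|B part).

    The low bits (C) are free: pick a random C, a random prime p of half
    the modulus length, round A|B|C / p to the nearest integer q, and keep
    the pair if q is prime. Then n = p*q has the form A|B|D for some new D.
    """
    free_bits = total_bits - prefix.bit_length()
    half = total_bits // 2
    while True:
        target = (prefix << free_bits) | random.getrandbits(free_bits)
        p = random_prime(half)
        q = (target + p // 2) // p        # nearest integer to target / p
        if is_probable_prime(q):
            n = p * q                     # differs from target by < p/2,
            if n >> free_bits == prefix:  # so the high A|B bits survive
                return n, p, q
```

For this to work the free region C has to be longer than p, so that the rounding error between p·q and A|B|C can never reach up into the fixed prefix; that is why a longer key, and hence a longer C, makes life easier.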

What about the other private key? Here we face a pretty much insurmountable problem. The value at the end of the modulus, following the collision blocks, needs to be the same in both public keys. However, in order to build a private key for the second modulus we would need to know two prime factors for it as well, and the likelihood of finding a second modulus of this form that also splits into two suitable primes is extremely small.

So this attack is impractical?

Yes, as far as I can tell, collisions of this sort in ssh keys don't throw up any security concerns. Hopefully you are all running newer versions of OpenSSH and comparing keys with SHA256 anyway.

Tuesday, July 7, 2015

In June I gave talks at both BSides London and the Dutch PHP Conference on the subject of hash functions. I really enjoyed both experiences, and they both helped me improve my public speaking. At BSides London I was part of the rookie track, which meant I got an excellent mentor in the shape of Yiannis Chrysanthou. It also meant I had only 15 minutes to cram something interesting into, which was good as it made me concentrate on the important parts.

The Dutch PHP Conference, or DPC, is a conference I have attended twice before. This year, as a speaker, there was no way I could not go, as they covered travel and accommodation as well as entry to the conference. It was again an excellent conference with great organisation and a great venue.

Wednesday, July 1, 2015

About a month ago I read this excellent piece of work https://blog.benjojo.co.uk/post/auditing-github-users-keys . My first reaction was to kick myself for not thinking of it before. It reminded me of this paper https://factorable.net/paper.html and the associated presentation, which is pretty special. One of the major tools used in that paper was a batch version of the Greatest Common Divisor algorithm, which can efficiently find common factors across large numbers of semiprimes. Shared primes should never happen in correctly functioning ssh key generation, but they are known to happen when things go wrong. The batch GCD algorithm didn't seem to have been used to examine the keys in the database, and I was in good company in spotting this possible extension.

Lots of bad SSH keys on Github, and nobody has even run a batch GCD yet. This is a big deal. https://t.co/HcEcHgES9V

At the time I expected to see people reporting the discovery of further bad keys found with batch GCD over the next few days. A few days later I came across http://cryptosense.com/batch-gcding-github-ssh-keys/ , which looks like exactly that, just in a more restrained writing style. In the meantime I have been having a poke around myself. Here are some observations from my experiments with the keys on GitHub and with uploading new keys.

There is now a minimum key length of 1024 bits enforced when submitting keys, with a suggestion of 2048 bits.

There are no keys below 1024 bits still in the database; all shorter ones have been removed.

You cannot add a public key from a pair generated with the debian bug. If you do you will see this message

There are very few obviously bad keys (e.g. with a trivially factorable modulus) on GitHub. I found a handful, all very old.

I could not find any keys with common divisors in the database. It looks like somebody has been through with batch GCD and notified GitHub, and they have revoked them.

New keys added are not checked to see if they have a common divisor, but presumably bad ones will eventually get discovered.

GitHub seems to receive a lot of spam signups these days, where users are listed in the API but deleted from viewing.

Tuesday, June 16, 2015

The Greatest Common Divisor (GCD) method can quickly find any common factor between two numbers. This works even on very large numbers such as those used by RSA. If you want to try it, you can write a method in a few lines of code to find the common factor of these two 1024 bit numbers:

If you can find a common factor, you can of course find the other factor of each number by simple division.
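Since the two 1024 bit numbers themselves aren't reproduced here, the sketch below uses small stand-in semiprimes; the idea is identical at any size. Euclid's algorithm really is just a few lines:

```python
def gcd(a, b):
    # Euclid's algorithm: keep replacing the pair with (b, a mod b)
    # until the remainder hits zero; the survivor is the GCD.
    while b:
        a, b = b, a % b
    return a

# Two toy "RSA moduli" that accidentally share the prime 101.
n1 = 101 * 103   # 10403
n2 = 101 * 107   # 10807

p = gcd(n1, n2)  # recovers the shared prime, 101
q1 = n1 // p     # the other factor of n1, by simple division
q2 = n2 // p     # the other factor of n2
```

Python's integers are arbitrary precision, so exactly the same function works unchanged on 1024 bit moduli.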

There is such a thing as the batch GCD algorithm: http://facthacks.cr.yp.to/batchgcd.html . This was used in 2012 to test millions of deployed RSA keys. That obviously took some time and computing hardware; you should be able to run this challenge in a few minutes on a laptop. This file https://s3-eu-west-1.amazonaws.com/batchgcd/10000_ssh_pub_keys.gz has a list of 10^4 ssh public keys of various lengths. Hidden amongst them are two which share a common factor. Your task is to find them with the batch GCD algorithm. You can either implement your own batch GCD from the algorithm as described in the article, or there are a few implementations out there. Once you have identified the two public keys and their prime factors, you have all the information you need to reconstruct the ssh private keys. With one of the private keys, sign the unzipped file. This is not something you normally do with an ssh key, but it is possible with:

openssl dgst -sha256 -hex -sign reconstructed-private-key 10000_ssh_pub_keys

If you want, you can email the output to batchGCD at fishtrap dot co dot uk for a final check that you have constructed the keys correctly.
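If you want to roll your own, the product-tree / remainder-tree version described at the facthacks link can be written roughly like this. It is a sketch of the "faster" variant given there, shown with toy moduli rather than real 10^4-key input:

```python
from math import gcd

def product_tree(ns):
    """Bottom layer is the inputs; each layer above holds pairwise products."""
    tree = [ns]
    while len(tree[-1]) > 1:
        layer = tree[-1]
        tree.append([layer[i] * layer[i + 1] if i + 1 < len(layer) else layer[i]
                     for i in range(0, len(layer), 2)])
    return tree

def batch_gcd(ns):
    """For each n, the GCD of n with the product of all the other moduli."""
    tree = product_tree(ns)
    rems = tree.pop()                  # [product of every modulus]
    while tree:
        layer = tree.pop()
        # Push the big product down the tree, reducing mod n_i^2 as we go.
        rems = [rems[i // 2] % (n * n) for i, n in enumerate(layer)]
    return [gcd(r // n, n) for r, n in zip(rems, ns)]

# Toy example: the first two moduli secretly share the prime 101.
moduli = [101 * 103, 101 * 107, 109 * 113]
print(batch_gcd(moduli))   # -> [101, 101, 1]
```

The point of the tree structure is that it computes all N pairwise-style GCDs in roughly O(N log N) multiplications instead of the naive O(N²) of testing every pair.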

Wednesday, May 6, 2015

One question I was asked when I demoed creating two PHP files with the same hash is: does it work on compiled binaries? Well, the answer is yes. In fact that is where I first got the idea from, in this demo. That example uses a C program as both the target and the tool doing the binary manipulation, which slightly obscures the way it works. It also makes use of an old, very slow implementation of the Wang attack to generate the collisions. To show how it works better and more quickly for an upcoming talk, I have created a really simple example using a PHP script to manipulate a binary once it has been compiled. I have put all the code used here on github. Below is the super simple C program: it compares two strings, and if they don't match it prints out an ASCII art picture of an angel. If they do match you get a picture of a devil.
It can be compiled with gcc and executed simply by doing

longEgg$ gcc -o demo ./demo.c
longEgg$ chmod a+x demo
longEgg$ ./demo

Executing the program will print out the angel, since the two strings differ in the last letter. Now we have our compiled binary, we need to do a bit of mucking about with it. What we are going to do is insert an MD5 collision into the long string of A's of the dummy text. We only need to insert two blocks of 64 bytes, but we need to insert them at the beginning of a block, i.e. at an offset that is a multiple of 64 bytes. When we run the php script over the binary the first time, it finds such a location and calculates the value of the four chaining variables of MD5 at that point in the file. It prints out the hex values concatenated together as a hash. We can now take that value and search for an MD5 collision with that initial state. The best MD5 collision finder is Marc Stevens' fastcoll. It can typically find collisions in a couple of seconds using a variant of the Wang attack. After downloading it you will need to compile it; there should be a Makefile for it in the code on github. Running it, specifying the initial state and output files, is shown below.

The -o option specifies the output files, and so will create two new files, a and b, which each contain 2 blocks of binary data. These blocks only work as MD5 collisions within the binary at that point. Running the php script for a second time will create two copies of the original compiled binary with the collisions inserted in the appropriate places.
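The manipulation tool here is a PHP script; below is the same two-pass idea sketched in Python instead. The first pass hunts for a 64-byte-aligned run of 128 'A' bytes (the dummy text in the C source) where the collision blocks can live; the second pass splices a block pair into each copy. Computing the MD5 chaining variables at that offset needs an MD5 core that exposes its internal state, which hashlib does not, so that step is left out, and the block contents below are stand-ins rather than real fastcoll output.

```python
def find_insert_offset(data: bytes) -> int:
    """First pass: find a 128-byte run of 'A's starting on a 64-byte boundary."""
    for off in range(0, len(data) - 127, 64):
        if data[off:off + 128] == b"A" * 128:
            return off
    raise ValueError("no 64-byte-aligned run of 128 'A' bytes found")

def splice(data: bytes, off: int, blocks: bytes) -> bytes:
    """Second pass: overwrite 128 bytes at `off` with the collision blocks."""
    assert len(blocks) == 128
    return data[:off] + blocks + data[off + 128:]

# Toy stand-in for a compiled binary: 64 header bytes then the 'A' padding.
binary = b"\x7fELF" + b"\x00" * 60 + b"A" * 192
off = find_insert_offset(binary)             # the aligned run starts at 64
angel = splice(binary, off, b"\x01" * 128)   # stand-in for fastcoll file a
devil = splice(binary, off, b"\x02" * 128)   # stand-in for fastcoll file b
```

With genuine fastcoll output in place of the stand-in blocks, the two spliced files get identical MD5 digests: everything before the blocks is shared, the blocks were generated to collide from that exact chaining value, and everything after them is shared too.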

I want to replace 128 bytes at position 6528 in colliding_binaries/demo
Chaining variable up to that point is c15cfe39c40e47f5b8ae31e6658fd1bd
Just output new file /Users/nmchugh/longEgg/devil with hash dea9dc288b6c56626997ce86ca8eb6da
Just output new file /Users/nmchugh/longEgg/angel with hash dea9dc288b6c56626997ce86ca8eb6da

So now we have created two more files angel and devil. Running each of those should give different outputs.

This thing got long and can basically be summarised as:

What's with the weird title?

The strange sentence "THE MAGIC WORDS ARE SQUEAMISH OSSIFRAGE" is the plain text solution to many cryptographic challenges. I first saw it appear one character at a time from a CBC padding attack, and I started googling it. The origins of the sentence lie with a challenge set by the authors of the RSA encryption algorithm in 1977. RSA was introduced to the world in this article in Scientific American in 1977. In that article a message encrypted with a 129 decimal digit public key is given, and a $100 prize is offered to anyone who can decrypt it.

At the time it was estimated that, by the methods then known, it would take millions of years to break. In fact, thanks to advances in factoring algorithms and computing power, it took only until 1994. A team successfully factored the 129 digit semiprime, reconstructed the private key and decoded the text to reveal the sentence. Here is their announcement. I have always found the basic proposition at the heart of RSA encryption hugely counter-intuitive. How can it be that people much smarter than me, using vast computer resources, cannot find which two prime numbers I multiplied together when the product reaches 300 decimal digits? What would it take to answer the 129 digit challenge that seemed near impossible in 1977? How difficult is the problem with modern factoring algorithms, software and cheap cloud computing?

The team that claimed the prize for what became known as RSA-129 distributed the computation amongst volunteers' machines over the internet. Apparently 1600 machines were used over a period of eight months. The algorithm used was the Multiple Polynomial Quadratic Sieve, considered the best understood factoring algorithm at the time.

Cado-nfs is an implementation of the number field sieve. It is made up of several individual programs that perform each of the necessary steps. I chose this implementation for a couple of reasons: firstly, it was written fairly recently and is maintained, so the build dependencies are easy to satisfy; secondly, the D in its title stands for distribution, as it has been built with distribution across several servers in mind.

Coincidentally, Google was offering a great free trial of 2 months and $300 of credit on their cloud platform, which is a pretty good deal if, like me, you enjoy playing around with CPU intensive stuff. To give cado-nfs a spin I started up a high CPU instance on Google's cloud compute engine and set it off on RSA-100, a 100 digit semiprime first factored in 1991. On a single instance it took 148 minutes, or nearly 2.5 hours. This is quite impressive given that the original result took days across many more machines, admittedly using a slower sieving algorithm. Unfortunately (or fortunately for security), factoring times do not scale linearly with the modulus length.

Before tackling anything longer I decided to try out cado-nfs's distributed capabilities. Doing this couldn't really be easier. A master process uses SSH to start and control threads on slave machines, to which it allocates work over an HTTP API. All of this is handled automagically; all you have to do is make sure each slave machine has the master's ssh key authorised, then start the master process with the ip-addresses of the slaves. Using this method I managed to get the time for RSA-100 down to 77 minutes across 2 machines and 47 minutes across 4 machines. I stopped at 4 machines, or 8 CPUs, as I hit the CPU limit for free instances on Google's platform. I later discovered that I could run 8 CPUs per region, so with three regions I could use 24 CPUs in total on 12 virtual machines. With four machines I felt my set up was probably good enough to tackle some longer problems. I factored RSA-110 and RSA-120, experimenting with parameters as I went. Here are some times I found:

Number  | CPU Time (s) | Elapsed Time (s) | CPU Time (hrs) | Elapsed Time (mins) | Machines
RSA100  | 15466.6      | 8883.19          | 4.296277778    | 148.0531667         | 1
RSA100  | 15213.6      | 4641.16          | 4.226          | 77.35266667         | 2
RSA110  | 55199.6      | 16018.8          | 15.33322222    | 266.98              | 2
RSA100  | 15390.4      | 3510.53          | 4.275111111    | 58.50883333         | 3
RSA100  | 15344.8      | 2829.9           | 4.262444444    | 47.165              | 4
RSA120  | 184096       | 30134.1          | 51.13777778    | 502.235             | 4

The number field sieve algorithm has several steps. It is the sieving for polynomial relations that can be parallelised amongst several machines. Following this there are several further steps which need to be performed on a single machine. These steps can be memory intensive, and as the numbers got bigger they took an increasing amount of time. It is for this reason that factoring time does not drop in direct proportion to the number of machines it is run across. With a bit of confidence in the setup from RSA-110 and RSA-120, I decided to tackle the 1977 problem. Here is the public key modulus as it appears in that article.

After 22 hours I checked in on it and there was both good news and bad news. The good news was that the sieving had finished! The bad news was that there had been a memory error during the next stage. It seemed quite likely that the machine had simply run out of memory, as it had only 1.8GB. So I backed up the working directory of cado-nfs from /tmp and took a snapshot. I then used the snapshot to create a memory optimised machine which seemed to have similar CPUs but 13GB of memory. Cado-nfs makes restarting a calculation simply a matter of rerunning the original command. I dug this out of the logfile and ran it. The calculation then picked up from where it had crashed and seemed happier with more memory. After a few hours I had the answer, and yes, I can confirm Wikipedia was correct. I'd racked up about $30 worth of usage, but as I said it was in fact free thanks to the trial.

Once we have the factors, decoding the message is easy RSA maths, with the exponent given in the article. The character encoding scheme used was also given and was a simple substitution with 0 = space, 1 = A, 2 = B and so on.
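As a sketch of that last step, here is the arithmetic with tiny stand-in values (p = 61, q = 53, e = 17), not the real RSA-129 numbers, and Python's built-in modular inverse rather than the Rosetta Code one. Reading the decimal expansion two digits at a time is an assumption made for the example.

```python
def rsa_decrypt(c: int, p: int, q: int, e: int) -> int:
    """Rebuild the private exponent from the factors and decrypt."""
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse of e (Python 3.8+)
    return pow(c, d, n)

def decode(m: int) -> str:
    """Substitution scheme: two decimal digits per character,
    00 = space, 01 = A, 02 = B, ..., 26 = Z."""
    s = str(m)
    if len(s) % 2:
        s = "0" + s                     # restore a leading zero if dropped
    chars = []
    for i in range(0, len(s), 2):
        v = int(s[i:i + 2])
        chars.append(" " if v == 0 else chr(ord("A") + v - 1))
    return "".join(chars)

# "HI" encodes as 0809; encrypt it, then recover it with the toy key.
p, q, e = 61, 53, 17
c = pow(809, e, p * q)
print(decode(rsa_decrypt(c, p, q, e)))   # -> HI
```

With the real 129 digit factors in place of p and q, the same two functions turn the published ciphertext back into the magic words.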

I stole the modular inverse function from http://rosettacode.org/wiki/Modular_inverse#Python . Thanks to Google for the free computing. Having used AWS for a while, I found the cloud compute web interface pretty self-explanatory. There are some nice features such as the usage graphs (to check you are always using 100% of the CPUs) and a little activities pop-up which states what actions you've just taken. The in-browser ssh client even seemed usable.

Thursday, February 5, 2015

A while ago a lot of people (~ 90,000) visited my site for a post about how easy it is to make two images with the same MD5 by using a chosen prefix collision. I used Marc Stevens' HashClash on AWS and estimated the cost at around $0.65 per collision.

Given the level of interest I expected to see cool MD5 collisions popping up all over the place. Possibly it was enough for most people to know it can be done quite easily and cheaply, but I may also have left out too many details in my original post.

In this further post I've made an AWS image available and created a step-by-step guide so that you too can create MD5 chosen prefix collisions and amuse your friends (disclaimer: they may not be that amused). All you need to do is create an AWS instance and run a few commands from the command line. There is an explanation of how the chosen prefix collision works in Marc Stevens' Masters thesis.

Here are the steps to create a collision.

1) Log on to the AWS console and create a spot request for an instance based on my public Amazon Machine Image (AMI). Spot requests are much cheaper than creating instances directly, typically $0.065 an hour. They can be destroyed, losing your data, if the price spikes, but for fun projects they are the way to go.

I have created a public AMI called hash-clash-demo. It has the id ami-dc93d3b4 and is in the US East (North Virginia) region. It has all the software necessary to create a collision pre-built. Search for it with ami-dc93d3b4 in community AMIs and then choose a GPU2 instance. I promise it does not mine bitcoins in the background, although, thinking about it, that would be a good scam and I may yet introduce the functionality.

2) Once your request has been created and evaluated, hopefully you will have a running instance to connect to via SSH. You may need to create a new key pair; follow the instructions on AWS to do this and install it on your local machine. Once you have your key installed, log onto the instance via ssh as ec2-user.

3) The shell script for running hash clash is located at /home/ec2-user/hashclash/src/scripts . Change into that directory and download some data to create a collision. Here I download a couple of jpeg images from tumblr.

4) It is best to run the shell script in a screen session so you can detach from it and do other stuff. Start a screen session by typing

screen

Once you are in the screen session, kick off the cpc.sh shell script with your two files. Send the output to a log file; in this case I called it demo.output.
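The session looks something like this. The file names come from the tumblr downloads in step 3, and the exact cpc.sh invocation shown is an assumption that may differ slightly between HashClash versions:

```shell
screen                                    # step 4: run inside screen
cd /home/ec2-user/hashclash/src/scripts
./cpc.sh plane.jpg ship.jpg > demo.output 2>&1
# Ctrl-A then D detaches, leaving the search running in the background
```
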

Detach from the screen session with Ctrl-A then D.

5) Tailing the log file, you should be able to see the birthday attack starting, which gets the hash differences into the correct locations.

tail -f demo.output

6) Leave the birthday search to do its thing for an hour or so. Hopefully when you come back the attack will have moved on to the next stage: creating the near collision blocks to gradually reduce the hash differences. The best way to check this is to look at the files created. The directory workdir0 contains all the data for the collision search for the first near collision block. More of these directories will be created as more near collision blocks are found.

7) Go away again; a watched collision pretty much never happens. Check back in ~5 hours to confirm it is still running. Tailing demo.output and listing the directory should let you know roughly what stage the attack is at.

Here we are only at block number 2 of probably 9.

8) Come back again about 10-12 hours from start and with any luck we have a collision.

This one finished at 02:45 in the morning, having been started at 10:30 the previous morning. You can tell when it finished, as that was the last point the log was written to. If the log file is still being updated, the collision search is still going on. It took 9 near collision blocks to finally eliminate all the differences, which is normal; 16 hours is a bit longer than average. The collisions have been created in files named plane.jpg.coll and ship.jpg.coll. You can verify they do indeed have the same md5 hash with md5sum.

Wednesday, January 7, 2015

Lately, in an effort to code up and properly understand the Wang attack on the MD4 family of hash functions, I've been reading a lot of papers on the subject. Many of the papers have very similar names and the same authors, so I found myself having to create a quick reference about each paper and its contents. Here they are, with a brief summary of what I got from each:

Collisions for Hash Functions MD4, MD5, HAVAL-128 and RIPEMD

Xiaoyun Wang, Dengguo Feng, Xuejia Lai and Hongbo Yu
https://eprint.iacr.org/2004/199.pdf
This is the original paper listing out some collisions for each of these functions. This must have been quite a blockbuster at the time.