By "sources" do you mean the .adf disk images? The zip file from cgexpo.com just contains a bunch of source files and executables. I've tried getting them into UAE but haven't been successful. I'm not sure how to get files from my Linux box into an Amiga .adf disk image without an actual Amiga computer. Plus, even if I were able to get the files into Amiga .adf disk images, I have no idea which files go onto which disk image.

If you've got the disk images already, I'd love to get them. If not, I'll look up Harry and see if he can help me put together some disk images.

Wookie

I can help a bit. There are 4 disks. The disks contain the directories named:
- boot
- rsa1
- rsa2
- rsa3

This is technology archaeology at its best. Figuring this out and finally getting the private key, or the algorithm for generating the private key, into portable C code is essential for preserving the Lynx as a homebrew platform. Thanks for your help so far.

You are welcome. Curt's encryption code will not work if compiled on a little-endian machine. It has to be something big-endian, like a Motorola-based Amiga or a PowerPC.

I am halfway through creating an endian-independent version of the code, but so far it does not work.

Inside the Lynx we have a special case. The exponent that we use for decryption is the fixed constant 3 instead of a 51-byte key. This is of course much faster than the generic version, but because of it, running the encryption process using the Lynx code is impossible.
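To illustrate why a fixed exponent of 3 is so much cheaper: a generic exponent needs a full square-and-multiply loop, one pass per exponent bit, while cubing is just two modular multiplications. A toy sketch with small numbers (the real Lynx operands are 51-byte bignums, so this is illustration only, and the names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* Toy modular arithmetic; only valid while a * b fits in 64 bits. */
static uint64_t modmul(uint64_t a, uint64_t b, uint64_t n)
{
    return a * b % n;
}

/* Generic square-and-multiply: one iteration per exponent bit,
   which is what a 51-byte (408-bit) exponent would need. */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t n)
{
    uint64_t result = 1 % n;
    base %= n;
    while (exp) {
        if (exp & 1)
            result = modmul(result, base, n);
        base = modmul(base, base, n);
        exp >>= 1;
    }
    return result;
}

/* Fixed exponent 3: just two multiplications, no loop at all. */
static uint64_t cube_mod(uint64_t c, uint64_t n)
{
    return modmul(modmul(c, c, n), c, n);
}
```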

What I would like to do is find a person with a PowerPC-based Linux system and run the code attached here on the PowerPC.

To compile this, type: gcc enctest.c

To run this, type: ./a.out

The output should be something like

LynxDecrypt fails
Decrypt works

The LynxDecrypt part is little-endian only. The Decrypt part is big-endian only.

I don't think the BIT #define is correct for big endian. When you compare Intel machines to 68000 machines, only the byte order is different, not the bit order within each byte.

For arithmetic involving arrays of bytes accessed as individual bytes, the resulting number generated by an algorithm using that array must be stored/accessed with a known endianness. You don't need to change the end of the array that you start processing from.

True. But if you look at the for-loop, it treats the data as bits. First it tries to access the most significant bit of the most significant byte. When you flip the byte order you still want to access the same bit, but now the most significant byte is the last byte. The bit you want is not the least significant one but the most significant one, so you need to index it by 7 - bit.
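Whether the shift flips depends on how the loop counts bits; here is one consistent way to index, as a sketch (my names, not the ones in enctest.c), assuming bit 0 is the most significant bit of the whole number. Within a byte the most significant bit is always bit 7, so the shift is 7 - (i % 8) either way; what flips with the storage order is only which byte you index:

```c
#include <assert.h>
#include <stdint.h>

/* Fetch bit i of an nbytes-long number, bit 0 being the most
   significant bit of the whole number.  Big-endian storage:
   byte 0 is the most significant byte. */
static int bit_be(const uint8_t *num, int nbytes, int i)
{
    (void)nbytes;
    return (num[i / 8] >> (7 - i % 8)) & 1;
}

/* Little-endian storage: the most significant byte is now the
   LAST byte, so the byte index is mirrored, but the in-byte
   shift stays the same. */
static int bit_le(const uint8_t *num, int nbytes, int i)
{
    return (num[nbytes - 1 - i / 8] >> (7 - i % 8)) & 1;
}
```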

But I may be wrong.

I just cannot make this work with Curt's algorithm. The algorithm ripped from the Lynx works OK.

Perhaps we should just get the standard RSA algorithm and recompile it for 408-bit keys.

--Karri

What compiler was the code originally passed through to get the final binary (back when the Lynx was still being developed)? How many bits should an "unsigned int" be in this code: 16 or 32? If it is 16, change all "unsigned int" to "unsigned short int" for modern compilers.
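On a modern compiler the guesswork can be avoided entirely with the fixed-width types from <stdint.h>: if the original target had 16-bit ints, uint16_t reproduces its wraparound exactly. A quick sketch of how the width changes results:

```c
#include <assert.h>
#include <stdint.h>

/* If the original compiler had 16-bit unsigned int, arithmetic
   wrapped at 65536; with a modern 32-bit unsigned int it doesn't.
   uint16_t pins the width regardless of compiler. */
static uint32_t mul_32(uint32_t a, uint32_t b)
{
    return a * b;
}

static uint16_t mul_16(uint16_t a, uint16_t b)
{
    return (uint16_t)(a * b);   /* truncated to 16 bits */
}
```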

I'm not in the slightest bit confident about this, but assuming the big numbers posted are big endian, I wrote the following quick program on top of the arbitrary-precision arithmetic support in OpenSSL (whichever version came with Mac OS X v10.6):

I've never used OpenSSL before and couldn't find a definitive reference on the intended usage patterns for BIGNUMs, hence the slightly awkward means of byte stuffing and retrieval — though if you insert something like a print_number(keyFile1) then you get exactly the same byte pattern back as you put in, so I'm confident they're correct.

Obviously this will be completely incorrect if the 51 byte numbers given are intended to be little endian, in which case you'll need to modify load_number and print_number. Or just reverse the order of the numbers in the arrays then reverse the order of the numbers in the output.
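If the numbers do turn out to be little-endian, the "reverse the order" fix mentioned above is just a byte-array reversal before loading and after printing. A sketch (load_number and print_number themselves are from my program, not shown here):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reverse a byte array in place, e.g. a 51-byte key, to convert
   between big-endian and little-endian storage before handing it
   to load_number or after getting it back from print_number. */
static void reverse_bytes(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len / 2; i++) {
        uint8_t tmp = buf[i];
        buf[i] = buf[len - 1 - i];
        buf[len - 1 - i] = tmp;
    }
}
```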

Hope I've understood!

You understood it perfectly. This was exactly what I hoped to do some day.

The keys are definitely big-endian, as that is the standard way of representing them. I just have to try out the key you found. Perhaps it works for encrypting stuff.

I also found out that the key produced from the keyfiles doesn't work backwards with the exponent 3. So back to the drawing board...

Not entirely. We do have a working LynxDecrypt implementation in the enctest. I'm assuming, by the terrible C code, that it was reverse engineered from the Lynx firmware code. I haven't tried too hard to clean it up and figure out exactly why it is working. If we can better understand that algorithm, then we'll be heading in the right direction.

True. The basic stuff was created by Harry Dodgson. Cleanup is by me.

What happens here is that the Lynx uses Montgomery multiplication for solving the equation.

There are two parts that should be understood.

1) The LynxMont routine is not what its comment says it is. It does a lot more, and the question is why.

2) The LynxMont is first run on the data alone. Then the result is run through LynxMont again.

From these two topics it should be possible to understand what the exponent really is. It may be something other than 3.

Or could the two runs through LynxMont just mean that the algorithm is run twice on the data?

That could be how Montgomery multiplication works, but I think your guess might be right. I also noticed that it appears to start on byte 1, not byte 0, the first time through the algorithm. I'll see if I can take a closer look at it this weekend. We need a clean, function-based (no globals) implementation of that code before we can get anywhere with this. So far we've seen one failed modular exponentiation using the OpenSSL library. My own bignum library came up with the same failed result independently, and I wasn't using any tricks, just straightforward longhand algorithms.
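For reference, here is a toy Montgomery (REDC) multiplication with R = 256 and a single-byte modulus; the Lynx works on 51-byte values, so this is only a model, and the names are mine. montmul(a, b) returns a*b*R^-1 mod n. One classic reason for an extra pass through such a routine is converting a value into or out of the Montgomery domain, which might explain the second LynxMont run, though that is a guess:

```c
#include <assert.h>
#include <stdint.h>

/* Toy Montgomery multiplication (REDC) with R = 256 and a small odd
   modulus n.  nprime must satisfy n * nprime == -1 (mod 256).
   Returns a * b * R^-1 mod n. */
static unsigned montmul(unsigned a, unsigned b, unsigned n, unsigned nprime)
{
    unsigned t = a * b;
    unsigned m = (t * nprime) & 0xFF;   /* mod R */
    unsigned u = (t + m * n) >> 8;      /* t + m*n is divisible by R */
    return u >= n ? u - n : u;
}
```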

I've been slowly picking at the enctest.c file, trying to clean it up, understand it and reorganize it so that I can try running other data through it. One of the things I noticed was that the decryption process doesn't really need lots of code. Here's the call tree that is generated by cflow:

I know why the convert_it algorithm starts at byte 1 instead of 0. Montgomery multiplication works backwards in a sense and thus can carry over into the first byte. I'm slowly wrapping my head around this. It looks like once I get this cleaned up, I'll be able to run the private keys through this to decrypt the encrypted private key and then use the private key to encrypt Harry's plaintext loader. I think I'm getting pretty close.

OK, I think I've made some major progress forward. I've been slowly cleaning up the enctest.c file, figuring out what it is doing in each step and renaming variables and restructuring the code. I've refactored all of the global variables so that they are now parameters to the functions. I've trimmed all of the cruft and I've wrapped my head around the whole algorithm.

The reason that using bignum libraries to do decryption hasn't worked is that there is primitive framing in the encrypted data. The value of the very first byte, subtracted from 256, gives the number of encrypted blocks to process. For instance, the first byte in Harry's encrypted loader is 0xFD (253), and 256 - 253 = 3 blocks in the "frame" to process. The blocks start immediately following the count byte and are 51 bytes long, the same size as the keys. Once you process the next 3 * 51 bytes of data, there is another block count byte. In Harry's encrypted loader, that byte is 0xFB (251), meaning that there are another 5 blocks of encrypted data in that frame.

So if you want to make decryption work using another bignum library, you'll have to read the first byte, calculate how many encrypted 51-byte blocks there are to process, and then for each 51-byte block after that, raise it to the third power and reduce it modulo the Lynx public key, and you should get the result you're looking for.
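Here is a sketch of that framing with toy 1-byte "blocks" and a small modulus standing in for the 51-byte blocks and the real 408-bit modulus (all names and sizes in the code are mine); the count-byte layout itself follows the description above:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TOY_MOD 187u   /* toy stand-in for the Lynx public modulus */

/* Per-block RSA step with the fixed exponent 3: c^3 mod TOY_MOD. */
static uint8_t toy_decrypt(uint8_t c)
{
    unsigned x = c % TOY_MOD;
    return (uint8_t)((x * x % TOY_MOD) * x % TOY_MOD);
}

/* One frame: a count byte (256 - number of blocks) followed by
   that many blocks.  Returns bytes of input consumed. */
static size_t decrypt_frame(const uint8_t *in, uint8_t *out)
{
    size_t nblocks = 256 - in[0];
    for (size_t i = 0; i < nblocks; i++)
        out[i] = toy_decrypt(in[1 + i]);
    return 1 + nblocks;
}
```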

What is most interesting about this is we now know the following:

1. How to decrypt the private encryption key for encrypting data for the Lynx.
2. How to use the decrypted private key to encrypt data for the Lynx.

I'm currently restructuring the enctest.c code so that we can pass any Lynx-encrypted data streams through it. That will let me try decrypting the encrypted private key used for encrypting data for the Lynx.

I'm getting so close to having generic Lynx encrypt/decrypt tools written in C that I can taste it.

You cannot use this algorithm for encryption. The exponent is hard-coded to some value here. The question is whether the hard-coded exponent is 3 or something else.

The interesting thing is to get the same decrypted result using a generic RSA algorithm. Because we need the generic RSA algorithm for encryption anyway.

So there is one other interesting observation I just made. Harry's plaintext loader has 410 bytes in it but the result data block is declared to be 600 bytes long in enctest.c. I noticed that there are two distinct encrypted blocks of data in the encrypted loader. What tipped me off was the block count framing that I described above. The lynx decryption algorithm decodes the first frame consisting of three encrypted 51 byte blocks into the base of the loader memory; in our case index 0 of the result data block. The second frame is decoded into memory starting at byte 256 after the base of the loader memory.

So what struck me as odd is that the first frame is only 3 blocks of 51 bytes of encrypted data but the second frame is 5 blocks of 51 bytes of encrypted data. Let me show you why that is odd. If you do the math for the encrypted loader data it makes sense:

But the math for the plaintext loader doesn't make sense. For each 51-byte block, we get 50 bytes of decrypted data. What is odd is that the second frame of 5 blocks is decrypted to the 256th byte of the result memory instead of the 151st byte. If the two frames were meant to be decrypted contiguously into memory, the second frame would be decrypted to the 151st byte, since 3 * 50 bytes = 150. Since it is decrypted to the 256th byte, that creates a gap of 106 bytes.

What that means to me is that the plaintext loader isn't 410 bytes long; it is actually 506 bytes long. We know that we're going to get 400 bytes of decrypted data (8 * 50 = 400). But we also know that there is a gap of 106 bytes between the two blocks of decrypted data, so the whole decrypted loader is really 400 + 106 = 506 bytes.
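The layout arithmetic above, written out with all sizes taken from the posts in this thread:

```c
#include <assert.h>

/* Decrypted loader layout as described above. */
enum {
    DEC_PER_BLOCK = 50,   /* each 51-byte encrypted block -> 50 bytes */
    FRAME1_BLOCKS = 3,    /* first frame, decrypted to offset 0       */
    FRAME2_BLOCKS = 5,    /* second frame, decrypted to offset 256    */
    FRAME2_OFFSET = 256
};
```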

Armed with that knowledge, I dumped out the entire decrypted result and here is the true plaintext version of Harry's encrypted loader:

So this explains that we have to be smarter about how we encrypt things. We can't just encrypt one big block and have the Lynx decrypt it into memory. If the enctest.c file is a fully reverse engineered version of the Lynx ROM, then we know that the Lynx expects there to be two frames of encrypted data. One frame to be decrypted to the base of loader memory and the second frame to be decoded into memory starting at the 256th byte of loader memory.

Does anybody know why there is a gap in the decoded version? Do we know if the Lynx only decrypts two frames? Do we know if the remaining part of Harry's plaintext loader (bytes 411-506) are actually important? I've made lots of progress but that has created lots of new questions.