Actually, that isn't steganography, but ROT13, which
is a simple substitution cipher that rotates, or shifts,
each letter by 13 places. In other words, you are simply
placing the latter 13 characters of the English alphabet
under the first 13, and substituting accordingly.
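As a quick illustration, here's a minimal ROT13 sketch in Python (the function name is my own, not from any particular library):

```python
def rot13(text):
    # Shift each letter 13 places within the alphabet;
    # since 13 + 13 = 26, applying ROT13 twice returns the original.
    result = []
    for ch in text:
        if "a" <= ch <= "z":
            result.append(chr((ord(ch) - ord("a") + 13) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            result.append(chr((ord(ch) - ord("A") + 13) % 26 + ord("A")))
        else:
            result.append(ch)  # non-letters pass through unchanged
    return "".join(result)

print(rot13("Hello"))         # Uryyb
print(rot13(rot13("Hello")))  # Hello
```

Note that ROT13 is its own inverse, which is exactly why it offers obscurity rather than security.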

Cheers!

Last edited by JustinT on Mon Nov 24, 2003 8:52 pm; edited 1 time in total

It was not steganography; it was cryptography - in this case, a substitution cipher, more specifically a "Caesar shift" where k = 13.

Look a few posts up.

To define this, you can easily differentiate between the two by
understanding that steganography is the art of concealing the
existence of data, while cryptography is the art of keeping the
actual data secure.

theroguechemist wrote:

I am so much more superior that all of you.

Hehe. I can dig the sense of humor. Welcome to the forums, by the way.

Last edited by JustinT on Fri Dec 10, 2004 11:37 am; edited 2 times in total

Just one question however... I've been sitting here for 30 mins thinking about the 'birthday paradox' and I can't find anything paradoxical about it. It makes perfect statistical sense and I can't make it conflict with itself.

Am I missing something obvious here, or is the only paradoxical thing about the 'birthday paradox' its name, on the basis that it isn't one?

Ah, now this means quite a bit coming from the M3DU54 man
himself. Gracias. :]


In regards to passphrase attacks: doesn't making key generation computationally intensive slow down these attacks, hence providing more protection to passphrases?

For instance, say I had a one-character password, and for each character in a password there were 128 different choices (a-z, A-Z, 0-9, special chars, etc.). Now if key generation were based on hashing that passphrase once to generate a 128-bit key, and say this took 1 second to complete (indulge me), it would take a maximum of 128 seconds to find the passphrase, thus allowing you to decrypt any data based on it.

Now say that instead of doing one hash, it was done multiple times - say 10 times. Assuming no shortcuts for computing the 10-round hash, it would now take up to 1280 seconds to find this one-character passphrase.

So effectively, by increasing key generation times, you are adding extra bits of security to an n-bit passphrase. Is this correct?

Also, JustinT, I've heard people say Twofish has slower key-setup times. What does this "key setup" involve? Is it basically telling the Twofish algorithm what sort of key you are giving it? My thinking was that it takes the same amount of time to generate a 128-bit key plus IVs for any cipher. Can you explain each of the four keying setups for Twofish?

In regards to passphrase attacks, allow me to provide an excerpt from one of Adam Berent's papers on the subject.

Quote:

Key Iteration Count

First, we never use the user provided pass-phrase as is. We
have to convert it to a usable key. The simplest way is to create
a hash of the key using a message digest algorithm such as MD5
or SHA-1. The iteration count is the amount of times we perform
this function to derive the final encryption key. In each step we
take the previous hash result and hash it again. This is done to
simply make it more time consuming for the attacker. An end user
might not notice a 10 second delay during decryption however it
could complicate things when you need to perform this 10 second
function 30,000 times in a row. Since the attacker cannot perform
the decryption until he has the actual key used to decrypt the file
he has to endure the delay for every single key in the dictionary.

Perhaps this is similar in concept to what you were pondering. For the entire article, please refer to this link.
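The iteration scheme the excerpt describes can be sketched as follows. This is a simplified illustration of hash iteration, not Berent's actual code; I've used SHA-256 in place of the MD5/SHA-1 the excerpt mentions, and a real system should use a vetted, salted KDF such as PBKDF2 rather than raw iteration:

```python
import hashlib

def derive_key(passphrase, iterations):
    # Hash the passphrase, then repeatedly hash the previous digest.
    # Each iteration multiplies the attacker's per-guess cost, since a
    # dictionary attack must repeat the full chain for every candidate.
    digest = passphrase.encode("utf-8")
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

key = derive_key("my passphrase", 30000)
print(len(key))  # 32 bytes, usable as a 256-bit key
```

The point of the loop is purely economic: the legitimate user pays the derivation cost once, while the dictionary attacker pays it once per candidate passphrase.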

As for the key setup of Twofish, it depends on several factors. The performance trade-off is heavily influenced by key setup, for that matter. The time required to change keys is comparable to that of setting one up; keep that in mind. You are also given a choice of keying options, which alter performance greatly. There is quite a bit of architectural philosophy mingled with these, but I won't dive into that this time.

These keying options include:

- Compiled*
- Full
- Partial
- Minimal
- Zero

Compiled:

Compiled keying is something that you assembly pimps will enjoy, as this feature is only available in that language. It requires additional memory to house the compiled code, but execution is decently fast. It embeds the subkey constants directly into the code, thus saving time and allowing you to take advantage of the LEA opcode for certain operations. It is most efficient on the Pentium Pro or MMX, in my opinion, as the benchmarks on both are similar to one another, while on a Pentium, things aren't that impressive. *Although this is an option, it isn't actually defined as one of the four general options for keying.

The following four options all perform a precomputation of the subkeys Ki, for i = 0, ..., 39, due to the required 40-word key expansion. Note, these are basic overviews of each option, just so you can get an idea as to what they are for.

Full:

This option does it all: 8x32-bit S-box table expansion, S-box lookup, and computation of the function g via the MDS matrix multiplication. As such, there are four lookups and three XOR operations per computation of the function g. This is a nifty option because the speed of both encryption and decryption is independent of key size. It consumes 4 KB of table space.

Partial:

This is basically used when you don't need the full key schedule, because you are only encrypting a few blocks of data. Instead of computing full 8x32 S-box tables, you compute 8x8 tables and use static 8x32 MDS tables to perform the MDS multiplication. Encryption and decryption speeds are still independent of key size. This option consumes only 1 KB of table space.

Minimal:

As opposed to partial keying, you precompute only the q-boxes into an S-box table, and the rest is performed during encryption. This also consumes 1 KB of table space for the storage of partial S-box computations. Accordingly, every round performs the computations necessary to derive the key bytes from S.

Zero:

This is simple: there is no precomputation of any S-box, so no additional tables are required. Everything is performed on the fly, trivially computing Ki and S as needed. Since no setup time is required, the sum of the key setup and encryption times gives the time it takes to encrypt one block.

Now, this just touches the surface of key setup; as I said, the performance issues revolve around the particular architecture on which you are implementing. As you can see, key setup can involve a variety of keying options, which can be used to precompute the S-boxes, the function g, the MDS transforms, et cetera. You can choose to do no precomputation at all, although I would recommend precomputing. There is also the compilation option, as aforementioned, which allows you to embed the subkey constants into a particular copy of the code, in assembly.

I hope this serves as a beneficial overview, but if in the event you want a more architecture-specific explanation, or just more detail, do let me know, as I'll be glad to point you to worthwhile sources of related information or go in-depth, myself.

All in all, it can be costly to change Twofish keys, so precomputation is recommended.

Last edited by JustinT on Mon Mar 21, 2005 7:26 pm; edited 3 times in total

M3DU54 wrote:

Just one question however... I've been sitting here for 30 mins thinking about the 'birthday paradox' and I can't find anything paradoxical about it. It makes perfect statistical sense and I can't make it conflict with itself.

Am I missing something obvious here or is the only paradoxical thing about the 'birthday paradox' its name, on the basis that it isn't one ?

Well, it isn't a paradox in the sense that it contradicts itself, logically
or otherwise. It is more of a paradox in that its mathematically sound
conclusion contradicts one's natural intuition about the problem. In other
words, one may be apprehensive about the validity of so counter-intuitive
a conclusion, making this a "contrary to popular belief" case, rather than
a "contradicting itself" case.
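To put numbers on just how counter-intuitive it is, here's a quick sketch of the collision probability (my own illustration, not tied to any particular text):

```python
def no_collision_prob(people, days=365):
    # Probability that `people` birthdays are all distinct:
    # (365/365) * (364/365) * ... * ((365 - people + 1)/365)
    p = 1.0
    for i in range(people):
        p *= (days - i) / days
    return p

# With only 23 people, a shared birthday is already more likely than not.
print(round(1 - no_collision_prob(23), 4))  # 0.5073
```

Most people guess you'd need closer to 180 people for even odds; the math says 23, which is precisely the "contrary to popular belief" effect.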

Here's hoping that made sense. *pours you a drink*

Cheers.

Ahhh, I understand; it's simply counter-intuitive. Paradoxical in the sense that it may at first seem wrong to some people, but is demonstrably correct.

So, in cryptanalysis, when we say that collisions are 'more frequent than expected', do we simply mean more frequent than gut instinct would suggest (but still what probability actually dictates)? Or are we talking about an increase in collisions in the ciphertext measured against the expected norm for a truly random sequence (i.e., a way of 'feeling out' the entropy, or rather the lack thereof)?

Thanks for your response, JustinT. Just in regards to making key generation take longer: doesn't this add considerable security to passphrases? I don't know of many programs which do this (but I guess a lot of programs don't exactly advertise these facts). I can recall that AxCrypt (freeware on SourceForge) does something like this; the author claims it adds 2^13 more bits of protection to the passphrase.

That is something I don't understand: how did he work out how many more BITS of protection it added? I can see that in brute-forcing, if generating a key based on a passphrase took 2^13 times longer than generating a random key and testing it, then perhaps that's why he says it adds 2^13 bits of protection. Maybe I am confused...

Maybe he is talking about "theoretical" protection. Like, if I had a two-character password with each character representing 128 possible choices, then I have 128^2 possible password combinations. Maybe you add more combinations "theoretically", because the password still only has 128^2 combinations, but since it takes a lot longer to generate a key from the password, it effectively has 2^13 times more combinations when compared to brute-forcing a key directly.

So, in summation: when brute-forcing, if it takes 2^13 times longer to generate a key from a password than to generate a random key, that means your two-character password effectively has the same strength as one with 128^2 * 2^13 combinations.
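For what it's worth, the usual accounting (my framing, not AxCrypt's documentation) is multiplicative: iterating key derivation c times multiplies the attacker's per-guess cost by c, which is equivalent to adding log2(c) bits of strength. So 2^13 iterations adds about 13 bits, not 2^13 bits. A quick sketch:

```python
import math

def effective_bits(charset_size, length, iterations):
    # Entropy of the password itself plus the work factor from
    # iterated key derivation, expressed in equivalent key bits.
    return length * math.log2(charset_size) + math.log2(iterations)

# Two characters from a 128-symbol set, 2^13 derivation iterations:
print(effective_bits(128, 2, 2**13))  # 14 + 13 = 27.0
```

This is why claims of "2^13 more bits" should usually be read as "2^13 times more work", i.e. 13 extra bits.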

Generally speaking, to consider collisions in this case would
be to correlate them to the defined birthday paradox, thus saying
they are "more frequent than expected." Collision methodology,
as a whole, doesn't stop with the birthday attack; it also includes
the meet-in-the-middle attack. To attempt to answer your question:
yes, broadly, we are talking about the defiance of our gut instinct.

If you don't mind drowning in a bucket of math and theory, here is
a great paper on birthday attack theory. The paper itself discusses
the construction of MACs from hash functions, and was authored by
two gurus in their own right - Preneel and van Oorschot.

I'm not certain as to what his logic may be, so I won't attempt a
conclusion; however, I will say that it is possible to increase the
time and effort one must expend to mount a successful attack.
With this in mind, the theoretical possibilities are left open to debate.

Keep in mind a few aspects of information theory that apply to
choosing passphrases. Standard English contains about 1.3 bits of
information per character. If you are to use a 64-bit key, for
example, that compares to around 49 characters, or roughly 10 words.
Also, it is generally agreed that if the passphrase is long enough,
then the derived key will be effectively "random." I like to use a
5:4 ratio of words to key bytes.

There is also a form of weakness known as key reduction, which deals
basically with the low and high ordering of bytes, as well as the case
of characters (lower, upper). Poor methodology can render much
smaller key spaces than the key size would suggest; in one case, I can
recall reductions from 2^56 to 2^40.

You must keep these things in mind, as they will affect the effectiveness
of an attack - many times in the attacker's favor. Timing delays and
good passphrase-choosing etiquette are two ways in which you can
thwart these attacks. Timing delays and increases in effort can be
applied in practice, as well, so it's not just theoretical.

Pardon me, as I'm busy with work, so replies may be a bit short at the
moment.

What is so special about 128-bit encryption if it is left by the wayside as machines are compromised by browser hijackers, trojans, keyloggers, phishing, etc.? It is equivalent to installing the best alarm on your front door while leaving the back door and the windows latched by a simple hasp.

Naturally, those are concerns. However, this is merely addressing the cryptographic layer of security; the various other issues are typically confined to other layers. While cryptography can't address lax policies for other layers, it can address policies that do the best we know how to do, given our state of mathematical and computational knowledge, to preserve confidentiality and integrity; this is, at least, what is "special" about achieving a conservative level of cryptographic security.

It's up to the cryptographer to get the cryptography right; if the cryptographic layer is rendered ineffective by another, misimplemented layer, then what we have isn't a cryptographic failure - it's an infrastructure fatality, as a whole. All that can be done is addressing each layer, respectively; this includes cryptography, which is only one layer among the many.

Cryptographic policies can assume no responsibility for irresponsibility elsewhere. There's no efficiency or effectiveness, in that. Thus, be conservative with cryptography; at least design it to address all that it can in a sufficiently secure manner. After all, out of all the layers of security, cryptography has fared decently well, all things considered, because of that.

On a side note, it's vital that we are also aware of the impact on cryptography that may be imposed by cryptovirology and kleptography; thus, it is equally important to address the other layers of security in our systematic infrastructures. Why? The same primitives we use to cryptographically defend a system can be used to cryptographically attack a system. There may come a time, rather soon, where we not only need to protect our systems with defensive cryptography, but protect our systems from offensive cryptography.

(I've been heavily researching this science, and will have some material together soon, demonstrating both attack and defense methodologies between a victim and adversary, using variations of Adam Young and Moti Yung's model for a cryptovirological information extortion attack.)

But, overall, you are correct. We build systems with security, as a whole, that is comprised of multiple layers; if one layer is instantiated securely, while another is instantiated insecurely, the system, as a whole, is often compromised. This is due to irresponsible implementing and lax policy-making. As long as humans are designing security, this will be an everlasting plague. In other words - that is where a change must begin.

In the same spirit of front-door-only security, the best anti-virus apps AND the 128-bit encryption are trashed when a sharp coder creates a simple exploit that steals script-based pastings (check the Security tab in your Internet Options).

I agree; such irresponsibility can negate any measure you take. While it's only fair to say that cryptography isn't to blame, in such a case, it's also wise to point out that a misimplementation of one layer of security has the potential of negating the effectiveness of another layer. This echoes the realization that cryptography is not the solution; it is merely part of a solution that demands proper implementation in every regard. Otherwise, you begin losing assurance fairly quickly.