I have been studying data compression for a while. For educational purposes, after a lot of reading, I managed to create software that performs encryption and authentication using AES-256-GCM. However, I have now reached a point where I cannot find an answer, and this is confusing me.

My software allows the user to choose the "hardness" of the encryption, a value ranging from 0% to 100% (a scale). However, I have not figured out exactly how to combine the logic of "hardness" with the logic of AES. Yes, I have plenty of ideas about playing with the key, the IV, the resulting data and so on, but one thing I have learned in this area is that you should never reinvent the wheel when there are already studies on the same aspects whose results have been checked and rechecked over and over again.

Therefore, what would be the best way to perform such work? I am looking for theory, not necessarily code.

$\begingroup$If this is password based encryption, then there is a natural security-performance tradeoff: The cost of the key-derivation-function can be increased or decreased. For example, bcrypt and PBKDF2 have a number of iterations which can be tuned.$\endgroup$
– CodesInChaos Jun 18 '15 at 11:16
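This comment's suggestion can be sketched with Python's standard library: the 0–100 slider simply scales the PBKDF2 iteration count. The linear mapping from 10,000 to 1,000,000 iterations below is an arbitrary assumption for illustration, not a recommendation.

```python
import hashlib
import os

def derive_key(password: bytes, salt: bytes, hardness: int) -> bytes:
    """Map a 0-100 'hardness' slider to a PBKDF2 iteration count.

    The mapping (10,000 to 1,000,000 iterations, linear) is an
    illustrative assumption, not a vetted parameter choice.
    """
    iterations = 10_000 + (1_000_000 - 10_000) * hardness // 100
    # dklen=32 gives a 256-bit key, suitable for AES-256-GCM.
    return hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)

salt = os.urandom(16)
key = derive_key(b"correct horse battery staple", salt, hardness=50)
```

Note that the tradeoff here is cost-to-attack versus the legitimate user's own key-derivation time; the cipher itself stays at full strength.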

$\begingroup$I think you have a misconception here, probably because, as you said, you have been studying data compression for a while. In cryptography, there are almost no use cases where the user can choose the degree of security of an encryption function. Usually, you want the maximum level of security that is reasonable. There is also no benefit in having fine-grained tuning of the security level, as opposed to data compression. There are multiple examples of attacks that exploit precisely this kind of functionality (e.g., the recent Logjam attack).$\endgroup$
– cygnusv Jun 18 '15 at 11:23

$\begingroup$Well, if you assumed attacks on AES were feasible, and that's why you want triple AES: 1. Note that there is already a double-key-length AES, AES-256, which is well-cryptanalysed, so why would you not use that? 2. Both 3AES and AES-256 are "vulnerable" (of course this is not nearly feasible today or in the next few decades) to a codebook attack of order $O(2^{128})$.$\endgroup$
– kodlu Jun 10 '16 at 0:18

This raises the question: in what real-world circumstance would you wish to reduce the difficulty for an attacker to break your cryptosystem?

To answer your question practically, the only reasonable way I can think of to accomplish this is to simply reduce the entropy in the key. At 100%, all 128 bits of the key are used. At 50%, 64 bits of the key are used, and the rest set to some static value (e.g., 0). At 25%, 32 bits of the key are used, and so on.
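The entropy-reduction scheme above can be sketched as follows. The `weaken_key` helper is a hypothetical name, and the scheme is deliberately insecure by construction, per the answer's caveat:

```python
import os

def weaken_key(full_key: bytes, hardness: int) -> bytes:
    """Keep only hardness% of the key bits and zero out the rest.

    Deliberately insecure: at hardness=50 a 128-bit key retains only
    64 bits of entropy. Illustration only, never use in practice.
    """
    total_bits = len(full_key) * 8
    kept_bits = total_bits * hardness // 100
    key_int = int.from_bytes(full_key, "big")
    # Mask that keeps the high kept_bits bits and zeroes the low ones.
    mask = ((1 << kept_bits) - 1) << (total_bits - kept_bits)
    return (key_int & mask).to_bytes(len(full_key), "big")

key = os.urandom(16)        # full 128-bit key
half = weaken_key(key, 50)  # 64 random bits, 64 zero bits
```

At 100% the key passes through unchanged; at 0% it collapses to all zeros.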

That said, I want to reiterate that there is no reason I can possibly conceive of to support such a feature, outside of sheer curiosity.

Edit: Note that this will result in an exponential decrease in security, which maps well onto how cryptographers typically think of the security of a system. Modern ciphers like AES-128 are so strong that a 100-fold reduction in the difficulty of breaking them still leaves brute force well outside the realm of conceivable human feasibility ($\frac{2^{128}}{10^{2}} \approx 2^{121.4}$).
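The arithmetic can be checked directly: a 100-fold speedup removes only $\log_2 100$ bits of security.

```python
import math

# A 100-fold speedup shaves log2(100) ~ 6.64 bits off a 128-bit key.
effective_bits = 128 - math.log2(100)
print(round(effective_bits, 2))  # prints 121.36
```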

Encryption is not like compression, where one can choose Normal, High, or Low, or something on a scale of 0–100 as you mentioned.

If you really want to do it, you can think of it in two ways:

Reduce the key space, as already mentioned in the answer above.

Reduce the number of rounds.

But both of these may lead to a situation where security is compromised, because the key space and the number of rounds are set after careful analysis to provide a required security margin, below which you should never go.

Another option is to offer the user a choice of algorithms, such as AES, Twofish, Camellia, CAST-256, SMS4, etc.