Abstract

The amplitude-encoding case of the double random phase encoding technique is examined by defining a cost function that compares an attempted decryption against the corresponding original input image. For
the case when a plaintext–ciphertext pair has been obtained and the correct decryption key is unknown, an iterative
attack technique can be employed to ascertain the key. During such an attack the noise in the output field for
an attempted decryption can be used as a measure of a possible decryption key’s correctness. For relatively
small systems, i.e., systems involving fewer than 5 × 5 pixels, the output decryption of every possible key can
be examined to evaluate the distribution of the keys in key space in relation to their relative performance when
carrying out decryption. For large systems, however, checking every key is currently impractical. One metric used to quantify the correctness of a decryption key is the normalized root mean
squared (NRMS) error. The NRMS is a measure of the cumulative intensity difference between the input and
decrypted images. We identify a core term in the NRMS, which we refer to as the difference parameter, d.
Expressions for the expected value (or mean) and variance of d are derived in terms of the mean and variance
of the output field noise, which is shown to be circular Gaussian. These expressions assume a large sample set
(number of pixels and keys). We show that as the number of samples increases, the decryption error converges to the statistically predicted characteristic values. Finally, we use the statistically derived expressions to corroborate simulations previously reported in the literature.
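Since the abstract centers on scoring candidate keys by the NRMS error of an attempted decryption, a minimal sketch may help fix ideas. The standard 4f DRPE model, the 16 × 16 toy image, the particular NRMS definition (intensity differences), and all function names below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
f = rng.random((N, N))                        # toy real-valued amplitude image
p1 = np.exp(2j * np.pi * rng.random((N, N)))  # input-plane random phase mask
p2 = np.exp(2j * np.pi * rng.random((N, N)))  # Fourier-plane mask (the secret key)

def encrypt(f, p1, p2):
    # 4f DRPE: phase mask at input, second phase mask in the Fourier plane
    return np.fft.ifft2(np.fft.fft2(f * p1) * p2)

def decrypt(c, key):
    # Amplitude encoding: taking |.| removes p1, so only p2 must be known
    return np.abs(np.fft.ifft2(np.fft.fft2(c) * np.conj(key)))

def nrms(a, b):
    # One common NRMS definition: cumulative intensity difference (assumption)
    ia, ib = np.abs(a) ** 2, np.abs(b) ** 2
    return np.sqrt(np.sum((ia - ib) ** 2) / np.sum(ia ** 2))

c = encrypt(f, p1, p2)
good = nrms(f, decrypt(c, p2))                # ≈ 0 for the correct key
wrong = np.exp(2j * np.pi * rng.random((N, N)))
bad = nrms(f, decrypt(c, wrong))              # large: output is speckle-like noise
```

In an iterative attack of the kind described above, such a score would rank candidate keys: the noisier the decrypted output field, the larger the NRMS error.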