
Daniel is a software developer for an investment banking company in Germany. He can be contacted at danielst@gmx.com.

Visual cryptography is a graphical method of concealing information. The information you want to hide can contain graphics, hand- or machine-written text, spreadsheet calculations, and the like. A visual cryptography scheme is based on the fact that each pixel of an image is divided into a certain number m of subpixels. The number m is called the "pixel expansion" of the scheme (because there is no technical way to divide a pixel, you go the other way and expand any pixel to a matrix of m pixels). The basic model consists of several transparency sheets. On each transparency, a ciphertext is printed, which is indistinguishable from random noise. The hidden message is reconstructed by stacking a certain set of transparencies and viewing them. The system can be used by anyone without any cryptography knowledge and without performing any cryptographic computations. For more information on visual cryptography, see "Visual Cryptography & Threshold Schemes" by Doug Stinson (DDJ, April 1998).

There are many schemes for implementing visual cryptography. Moni Naor and Adi Shamir developed the Visual Secret Sharing Scheme (VSSS) (http://citeseer.ist.psu.edu/naor95visual.html). As a further generalization of a visual cryptography scheme, the very existence of a secret image can be concealed by displaying a different image on each transparency. Naor and Shamir solved this problem in the case of binary (black and white) images for a (2,2) threshold scheme. The problem was also considered for a general access structure.

Stefan Droste offered a further generalization: Stacking the transparencies of all participants in the scheme recovers a secret image, and there is only this one way to recover it (http://ls2-www.cs.uni-dortmund.de/~droste/). However, the participants of any subset of the entire set of participants share a secret, too. Hence, you actually have a multitude of more or less secret images.

In this article, I use the framework described by Nakajima and Yamaguchi to develop a Python-based application that embodies the proposed model. I call the application "PyEvcs." While my dithering and image-processing techniques are original, the idea of using Python to implement a visual cryptography scheme is not new. Frank Stajano's Visual Cryptography Kit (VCK) (http://www-lce.eng.cam.ac.uk/~fms27/vck/) and Thomas M. Thomson's Visual Cryptography Project (http://citeseer.ist.psu.edu/thompson00rit.html) are both implemented in Python.

The Model

Figure 1 illustrates the model I implement. PyEvcs uses three images as input data: the secret image (information) I want to conceal (The Cameraman), and two images representing information to be shared, Dorian and Lena. This model mainly consists of two phases: halftoning and encryption.

First, you need to transform the gray-level images into simple black-and-white images in such a way that they are still meaningful; that is, the obtained black-and-white replications (called "Intermediate Images") mimic the aspects of the continuous tone ones. PyEvcs uses an ordered dither algorithm, similar to the dithering techniques used for newspapers. To illustrate, I have defined two different dithering masks, but any viable mask can be easily inserted into the application and can be tested.
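To sketch the idea (the 2×2 mask size and threshold values here are illustrative assumptions; the two masks PyEvcs actually declares in Listing One may differ), an ordered dither compares each pixel's gray level against a small threshold mask and expands the pixel into a block of black and white subpixels:

```python
# Hypothetical 2x2 threshold mask on an 8-bit gray scale; the masks in
# Listing One may use different sizes and values.
MASK = [[64, 192],
        [224, 96]]

def dither_pixel(level, mask):
    """Expand one gray-level pixel into a len(mask) x len(mask) block of
    subpixels (1 = black, 0 = white). A subpixel turns black when the
    gray level reaches the mask threshold at that position."""
    n = len(mask)
    return [[1 if level >= mask[i][j] else 0 for j in range(n)]
            for i in range(n)]

def pixel_transparency(block):
    """A pixel's transparency: the number of white subpixels."""
    return sum(cell == 0 for row in block for cell in row)
```

With this convention, a level-0 pixel stays fully white (transparency 4 for a 2×2 mask) and a level-255 pixel becomes fully black (transparency 0); intermediate levels produce intermediate transparencies.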

But simply dithering the three images is not enough to obtain the proposed Gray-level Extended Visual Cryptography Scheme (GEVCS). You also need to further process (encrypt) the images. This process is the most interesting part of the model.

Again, the simplest visual cryptography scheme is a secret image "split" into two shared images. The shared images are printed onto separate transparencies and handed to the two participants in the scheme. To decrypt, the participants simply stack their transparencies and can visually recognize the recomposed secret message. All three images involved in the scheme must have the same dimensions. Furthermore, you must have corresponding pixels: two pixels in separate images with the same coordinates. In other words, if you stack the two images, a pair of corresponding pixels superimposes perfectly.
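The classic Naor-Shamir (2,2) construction can be sketched for a single binary pixel, here with pixel expansion m = 2 (the names below are mine for illustration, not PyEvcs identifiers):

```python
import random

# For a white secret pixel, both shares get the same subpixel pair, so
# the stack still shows one white subpixel; for a black pixel, the
# shares get complementary pairs, so the stack is fully black.
WHITE_CHOICES = [([1, 0], [1, 0]), ([0, 1], [0, 1])]
BLACK_CHOICES = [([1, 0], [0, 1]), ([0, 1], [1, 0])]

def split_pixel(secret_is_black):
    """Return the (share1, share2) subpixel pairs for one secret pixel."""
    return random.choice(BLACK_CHOICES if secret_is_black else WHITE_CHOICES)

def stack(a, b):
    """Stacking transparencies: a subpixel is black if it is black on
    either share (logical OR)."""
    return [x | y for x, y in zip(a, b)]
```

Note that each individual share is a uniformly random pair regardless of the secret pixel, which is why a single transparency reveals nothing about the secret.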

During the dithering process at the pixel level, any pixel in the original (continuous tone) image is expanded to a matrix of black and white subpixels, with the number of black or white subpixels determined by the gray level of the original pixel. The number of white subpixels in a pixel's expanded (halftoned) version is called the pixel's "transparency."

In Figure 2, three different recomposition cases can be analyzed. The encryption process applies pixel by pixel to the three halftoned images, controlling the transparencies of the shared pixels so that the required transparency of the target pixel is obtained. The matrix obtained by expanding a pixel is a binary matrix in which "1" elements represent black subpixels and "0" elements represent white subpixels. The operations at the pixel level are binary operations in the OR semigroup.
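Concretely, superimposing two expanded pixels is an elementwise OR of their binary matrices, and the target transparency is whatever white survives (a small hand-rolled illustration, not PyEvcs code):

```python
def stack_blocks(b1, b2):
    """Superimpose two expanded pixels: elementwise OR, so a subpixel
    stays white (0) only where both shares are white."""
    return [[p | q for p, q in zip(r1, r2)] for r1, r2 in zip(b1, b2)]

# Two 2x2 expanded pixels, each with transparency 2 (two white subpixels).
share1 = [[1, 0], [0, 1]]
share2 = [[1, 0], [1, 0]]
target = stack_blocks(share1, share2)  # [[1, 0], [1, 1]]
```

Here T1 = T2 = 2, but this particular arrangement of subpixels leaves the target with transparency Tt = 1; rearranging the subpixels within each share is exactly the degree of freedom the encryption module exploits.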

In Figure 2, T1, T2, and Tt are the pixel transparencies in the share1, share2, and target images, respectively. In Figures 2(a) and 2(b), reconstruction is possible; it is merely a question of arranging the black and white subpixels inside the given matrix. The problem is solved by the encryption module of the application, which builds proper matrix collections for every required gray level.

Why collections? Because the security of the scheme is also important; therefore, at every encryption step, a matrix is randomly selected from the proper collection. However, in Figure 2(c), you can no longer obtain the required transparency for the corresponding pixel in the target image, no matter how you rearrange the subpixels inside the matrices. So how do you handle such a situation?

Assume you define a 3D space having the transparencies of the pixels in the three images as axes: the x-axis represents the transparencies of the pixels in share1, the y-axis the transparencies of the pixels in share2, and correspondingly, the z-axis the transparencies of the pixels in the target image. Any point in the defined space is characterized by three values representing transparencies in the three mentioned images: p(T1, T2, Tt). You first determine the volume containing the points for which reconstruction is possible, as in Figures 2(a) and 2(b). Afterwards, you analyze every point outside this volume, as in Figure 2(c).

For instance, say you have a point (outside the "possible zone") p'(T'1, T'2, T't) and you still want to be able to encrypt that pixel. However, you don't want the initial image to deteriorate. Consequently, you have to accept a compromise: The application determines the closest point to p' situated in the "possible area," say p"(T"1, T"2, T"t), and replaces p' with p". That is, the transparencies of the corresponding points in share1, share2, and target become T"1, T"2, and T"t, respectively. Nevertheless, you should check that the new transparencies do not disturb your images so much that the original image is no longer recognizable. The security of the scheme is another matter: While modifying the shared images, you are not allowed to "betray" the secret image. Thus, the problem slowly becomes more and more complex. Luckily, Python makes handling it much easier.
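The replacement step can be sketched as a nearest-neighbor search over a precomputed set of feasible transparency triples (the set itself comes from the okzone computation of Listing Three; the `okzone` argument and the Euclidean metric here are illustrative assumptions):

```python
def nearest_ok_point(p, okzone):
    """Given an infeasible triple p = (T1, T2, Tt), return the feasible
    triple in okzone closest to p (squared Euclidean distance). That
    triple then replaces the transparencies in share1, share2, and the
    target image."""
    def dist2(q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(okzone, key=dist2)
```

The smaller the distance to the chosen replacement, the less the three images are disturbed, which is why the shortest distance is the natural criterion.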

The Implementation

PyEvcs uses the core functionality of Python, the Python Imaging Library (PIL), and the numarray package. For some GUI functionality, I also used Tcl/Tk.

There are a variety of dithering techniques that can be implemented. I chose an ordered dither algorithm and designed two representative dithering masks, declared as global variables; see Listing One (any other dithering mask can be inserted into the code). The dithering function takes the corresponding dithering mask as a parameter and returns the determined binary matrix. It also calculates the transparency (number of white subpixels) for every pixel in the gray-level (original) image and maps the coordinates of each pixel to its transparency value; see Listing Two.

As for the encryption process, Listing Three determines the okzone containing the points for which reconstruction is possible without modifying the images; see Figures 2(a) and 2(b). Afterwards, I define a "distance" function. For the points outside the okzone, it is necessary to calculate the shortest distance between the point in question and the okzone; see Listing Four.

To properly encrypt the three dithered images, you first rearrange the subpixels in the obtained pixel matrices (see Figure 2). You then obtain a 2×m binary matrix in which the proper number of [1 1], [1 0], [0 1], and [0 0] columns can be calculated. If you find a point outside the okzone, you just determine the closest candidate in the okzone and correspondingly replace the transparency values; see Listing Five.
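The column bookkeeping can be made explicit. With m subpixels per pixel, a [0 0] column (white over white) is the only way to produce a white target subpixel, which pins down all four counts; a negative count means the triple lies outside the okzone. This sketch assumes the standard OR model and is not PyEvcs's exact code:

```python
def column_counts(t1, t2, tt, m):
    """Counts of each column type in the 2 x m matrix, given the share
    transparencies T1, T2 and the target transparency Tt. Returns None
    when no arrangement exists (the point is outside the okzone)."""
    counts = {
        (0, 0): tt,                 # white over white -> white target subpixel
        (0, 1): t1 - tt,            # white on share1 only
        (1, 0): t2 - tt,            # white on share2 only
        (1, 1): m - t1 - t2 + tt,   # black over black
    }
    if min(counts.values()) < 0:
        return None
    return counts
```

For example, with m = 4 the triple (T1, T2, Tt) = (2, 2, 1) is feasible with one column of each type, whereas (3, 3, 1) and (1, 1, 2) both fail, matching the situation of Figure 2(c).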

Now that the 2×m matrix is known, you need only construct the proper square matrix used to generate the matrix collection; see Listing Six. The matrix collection is generated by randomly permuting the columns of the base matrix; see Listing Seven.
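Generating one member of the collection amounts to shuffling the columns of the base matrix (a minimal sketch; the details of Listing Seven may differ):

```python
import random

def random_column_permutation(matrix):
    """Return a copy of a 2 x m binary matrix with its columns in random
    order. Picking a fresh permutation for every pixel is what makes
    each share indistinguishable from random noise."""
    cols = list(zip(*matrix))                 # transpose to a list of columns
    random.shuffle(cols)                      # permute the columns
    return [list(row) for row in zip(*cols)]  # transpose back
```

A permutation changes only the arrangement of the subpixels, so the row sums (the share transparencies) and the column multiset (hence the target transparency) are preserved.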

Now you are ready to apply the algorithm for obtaining the two shared images of the Gray-level Visual Cryptography Scheme. All you need to do is put everything together in the proper order; see Listing Eight. At the end of this function, you use the psprint procedure in conjunction with the v1 and v2 views. Again, in visual cryptography, decryption is performed directly by the human visual system: The secret image is revealed by properly stacking the two shared images. PyEvcs can generate all the required images (plus a couple of intermediate images for testing purposes) in PostScript format. To test the results quickly, PyEvcs also simulates the superimposition of the two shared images. In this case, the function is not very complicated; see Listing Nine.
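The simulated superimposition is just the pixelwise OR of the two share bitmaps, sketched here over nested lists (Listing Nine operates on PyEvcs's own image objects):

```python
def superimpose(share1, share2):
    """Simulate stacking two share bitmaps (1 = black): the result is
    black wherever either share is black, exactly as with physical
    transparencies."""
    return [[a | b for a, b in zip(row1, row2)]
            for row1, row2 in zip(share1, share2)]
```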

Conclusion

Extended visual cryptography schemes let you construct visual secret sharing schemes in which the shared images are meaningful. So again, why Python? Image processing means intensive calculations, reading/writing image files in different formats, Boolean transformations at the pixel level, and many other operations. I have more than 10 years of experience in developing complex C++ and Java applications, and I've also developed diverse programs using Perl, Tcl, Visual Basic, and shell scripting. All programming languages and development environments have their strengths and weaknesses, but none has provided the intuitiveness and effectiveness I found in Python. Visual cryptography is a technique for information concealment with major advantages: It is very low-tech, because all you need to use it is a couple of transparencies and a printer. Additionally, the decryption process is immediate and does not require any cryptographic knowledge from users.

To get the secret information, participants of the scheme need only properly stack their transparencies. The schemes also provide a high level of security (some are even perfectly secure), as established in the theoretical studies.

That said, visual cryptography schemes suffer mainly from pixel expansion (every pixel in the original image is expanded during encryption into a matrix of subpixels) and loss of contrast (as you have seen, the contrast of both the shared images and the target image can deteriorate). Combining the theoretical research in visual cryptography with more developed testing applications in Python is a major step toward finding solid practical applications for visual cryptography.

def encrypt(self, mask=_boxMask):
    """Perform the actual dithering algorithm. For any pixel in the
    original image the function fills a square with black and white
    pixels in the resulting image. If the grey level of the pixel in
    the original image is greater than the one specified in the mask,
    the corresponding subpixel in the resulting image will be black.
    Otherwise, the subpixel in the resulting image remains white.
    Precondition: the image must have a specified size."""
    maxX = self.__xmax
    maxY = self.__ymax
    length = len(mask)
    result = bitmap((length*maxX, length*maxY))
    for x in range(maxX):
        for y in range(maxY):
            level = self.__img.getpixel((x, y))
            transparency = 0
            for i in range(length):
                for j in range(length):
                    if level >= mask[i][j]:
                        # print a black pixel
                        result.set(i + x*length, j + y*length, 0)
                    else:
                        # print a white pixel
                        result.set(i + x*length, j + y*length)
                        transparency = transparency + 1
            self.__data[(x, y)] = transparency
    return result

