Fractal image compression

It was promoted in the early 1990s as the future of digital image compression. The basic idea was, given an image, to find the particular fractal (or combination of fractals) that would generate that image after a few iterations. The way to do this is to find bits of the image that are self-similar, but at different scales. For example, if you had a picture of clouds, the edge of a cloud looks much the same whether you look at the whole cloud or at a tiny piece of it. So you say something like "this little bit of cloud has the same shape as the big one, only smaller", and you only have to store a few little pieces of information about how to generate the big cloud, and that's it. This would give a huge compression ratio, since you wouldn't have to store millions of pixel values, just a few equations which, when iterated enough times, would converge to the original image. If you only wanted a rough image, you would iterate fewer times.
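To make the "few equations regenerate the whole image" idea concrete, here is a minimal sketch (my own toy example, nothing to do with any real codec) of an iterated function system: three contractive affine maps, a handful of coefficients in total, regenerate the Sierpinski triangle at whatever resolution you ask for.

```python
# Toy IFS via the "chaos game": repeatedly apply a randomly chosen
# contractive map and mark where the point lands. The three maps below
# each halve distances toward one corner of a triangle; their attractor
# is the Sierpinski triangle.
import random

def render_sierpinski(size=64, iterations=20000, seed=1):
    """Rasterise the attractor of three contractive maps onto a size x size grid."""
    random.seed(seed)
    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]
    grid = [[0] * size for _ in range(size)]
    x, y = 0.5, 0.5
    for _ in range(iterations):
        cx, cy = random.choice(corners)
        x, y = (x + cx) / 2, (y + cy) / 2   # contractive affine map
        grid[min(int(y * size), size - 1)][min(int(x * size), size - 1)] = 1
    return grid

grid = render_sierpinski()
filled = sum(map(sum, grid))  # far fewer than size*size cells get hit
```

The whole "image" here is described by six corner coordinates, yet it can be rendered at any `size` you like; that is the promise fractal compression was trying to generalise to arbitrary photographs.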

The other purported advantage was that you could decompress the image at a higher resolution than you compressed it at; the "fractal nature" of the encoding would interpolate in a more natural manner than simply enlarging the image.

The problem is that it never came to fruition, for one main reason:

There is no good algorithm for working out the fractal for an arbitrary image. In fact, the best algorithm so far developed is the "graduate student algorithm" (this is serious, that's what the algorithm is called), attributed to Michael Barnsley. It works like this:

Get the image you want to compress.

Put a graduate student in fractal mathematics, a computer, and said image in a room.

Lock the door, and come back when the student has worked out the fractal.
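For what the automated approaches do exist, they mostly work by brute force: partition the image into small "range" blocks and, for each one, exhaustively search larger "domain" blocks for the best contractive scale-and-offset match. Here's a rough sketch of that idea in 1-D (my own simplification; real encoders do this on 2-D pixel blocks, which is exactly why encoding is so painfully slow):

```python
# Toy partitioned-fractal encoder/decoder in 1-D. Range blocks are rsize
# samples; domain blocks are twice as long and downsampled by averaging,
# so the mapping from domain to range is contractive in space. The scale
# factor s is clamped below 1 so the decoder's iteration converges.

def encode(signal, rsize=4):
    """Return, for each range block, (domain index, scale, offset)."""
    dsize = 2 * rsize
    domains = []
    for d in range(0, len(signal) - dsize + 1, rsize):
        block = signal[d:d + dsize]
        domains.append([(block[i] + block[i + 1]) / 2 for i in range(0, dsize, 2)])
    code = []
    for r in range(0, len(signal), rsize):
        rng = signal[r:r + rsize]
        best = None
        for di, dom in enumerate(domains):
            # Least-squares fit rng ~= s * dom + o.
            n = rsize
            sd, sr = sum(dom), sum(rng)
            num = n * sum(d * x for d, x in zip(dom, rng)) - sd * sr
            den = n * sum(d * d for d in dom) - sd * sd
            s = max(-0.9, min(0.9, num / den if den else 0.0))
            o = (sr - s * sd) / n
            err = sum((s * d + o - x) ** 2 for d, x in zip(dom, rng))
            if best is None or err < best[0]:
                best = (err, di, s, o)
        code.append(best[1:])
    return code

def decode(code, length, rsize=4, iterations=12):
    """Iterate the stored transform from zeros; it converges to the signal."""
    signal = [0.0] * length
    dsize = 2 * rsize
    for _ in range(iterations):
        domains = []
        for d in range(0, length - dsize + 1, rsize):
            block = signal[d:d + dsize]
            domains.append([(block[i] + block[i + 1]) / 2 for i in range(0, dsize, 2)])
        out = []
        for di, s, o in code:
            out.extend(s * d + o for d in domains[di])
        signal = out
    return signal
```

Note the decoder never sees the original data: it just iterates the stored (domain, scale, offset) triples from a blank signal until they converge, which is the "run the equations a few times" step described above. The encoder's nested search over every domain for every range block is the expensive part that nobody ever made fast enough.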

For what it's worth, my 3D engine uses wavelet compression for everything, and even with a very simple pseudo-Haar transform, some of my textures get as much as 150:1 compression, and meshes often get down to 3 bytes per triangle (that's including texture coordinates and surface normals) - for example, one actual mesh (a 6962-triangle torus) compresses down to a 21370-byte file.
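For the curious, here's a rough sketch of the kind of Haar-style transform involved (my guess at what a "pseudo-Haar" pass looks like, not the author's actual engine code): each pass stores pairwise averages plus differences, and on smooth or regular data the differences are mostly zero, which is where ratios like 150:1 on regular textures come from.

```python
# One level of a Haar-style transform: first half of the output is
# pairwise averages (a coarser copy of the data), second half is the
# pairwise differences (the "detail"). Regular data -> mostly-zero
# detail -> easy to quantise or run-length-encode away.

def haar_forward(data):
    avgs = [(a + b) / 2 for a, b in zip(data[::2], data[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(data[::2], data[1::2])]
    return avgs + diffs

def haar_inverse(coeffs):
    half = len(coeffs) // 2
    out = []
    for s, d in zip(coeffs[:half], coeffs[half:]):
        out.extend((s + d, s - d))
    return out

# A very regular "texture" row: every detail coefficient vanishes, so
# only the averages need storing (and recursing on those, far fewer).
row = [10.0, 10.0, 10.0, 10.0, 20.0, 20.0, 20.0, 20.0]
coeffs = haar_forward(row)
nonzero = sum(1 for c in coeffs if abs(c) > 1e-9)
```

The flip side is visible here too: on random data the differences are as large as the data itself, so nothing compresses, which matches the expansion on random input mentioned below.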

Admittedly, the insanely-high image compression ratio is on textures with very regular patterns - "real" images tend to get 2:1, and unfortunately, random data actually expands in size, which is quite typical of standard wavelet schemes. Fortunately, my engine doesn't require the use of wavelet-transformed CODECs; it has another CODEC which is basically a simplified PNG, and also many hooks for adding other CODECs as they're implemented.

Now, if I had a fractal wavelet transform, I'm sure I could convey even more information in less space. Damn Michael Barnsley, though: one of the patents he holds covers partitioning an image into pieces for fractal analysis, which is absolutely mandatory for fractal-wavelet image compression. Butthole.

This technique works very well for textures, i.e. repeated patterns used to cover 3D objects. The demoscene uses procedural textures extensively to get amazing 3D demos to fit in 64k; since no actual textures are stored, only the equations to generate them, the amount of data required to display them is drastically reduced.
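A tiny sketch of the idea (an invented stand-in, not any actual demo's generator): the "texture" below is never stored anywhere, only the few lines of code that produce it, and it can be generated at any size.

```python
# Procedural texture from equations only: two interfering sine waves
# plus a radial term, a classic plasma-style pattern. The stored "data"
# is just this function; the pixels are synthesised on demand.
import math

def make_texture(size=256):
    """Generate a size x size greyscale texture procedurally."""
    tex = []
    for y in range(size):
        row = []
        for x in range(size):
            v = (math.sin(x * 0.12) + math.sin(y * 0.09)
                 + math.sin(math.hypot(x - size / 2, y - size / 2) * 0.2))
            row.append(int((v + 3) / 6 * 255))  # map [-3, 3] to [0, 255]
        tex.append(row)
    return tex

tex = make_texture()
```

Call it with `make_texture(1024)` and you get the same pattern at four times the resolution for zero extra storage, which is exactly the resolution-independence point made below.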

This system has a pleasant side effect: since the entire thing (assuming we're still talking about demos) is generated from equations alone, there is in fact no upper resolution limit. With a sufficiently powerful computer and a good demo, the demo could be displayed at very high resolution with no pixelation of textures or blockiness of models. Admittedly, past a certain point the models would start needing more data to look decent (curves would begin to show their component lines; smoothing only goes so far), though if you used mathematically generated curves in the first place, you could go to extremely high resolutions without needing to add any data.