
Fractal Image Compression

DCT-based JPEG compression is quite effective at low or moderate compression ratios, up to about 20 or 25 to 1. Beyond this point the image becomes very “blocky” as the compression increases, and the image quality becomes too poor for practical use. JPEG obtains high compression ratios by cutting off the high-frequency components of the image. This can also introduce very visible artifacts, in particular near sharp edges in the image; this effect is known as the Gibbs phenomenon. The practical limit on the compression ratio in turn implies a limit on the number of images that can fit on a hard disk or a CD-ROM, even as the storage space requirements of graphics applications continue to increase at a very fast rate.

Another drawback of JPEG compression is its resolution dependence. In order to “zoom in” on a portion of an image and enlarge it, it is necessary to replicate pixels. The enlarged image will exhibit a certain level of “blockiness” which soon becomes unacceptable as the expansion factor increases. Because of this problem, it is sometimes necessary to store the same image at several resolutions, thus wasting storage space.

So, although JPEG is now a well-established standard for lossy image compression, it has its limits, and alternative compression methods must be considered. Wavelet-based methods are gaining popularity. They are similar in spirit to the DCT methods but do not suffer from some of their shortcomings. Methods based on vector quantization (VQ) are also very promising. But in this chapter we will look in more detail at yet another technique: fractal image compression.

The term *fractal* was first used by Benoit Mandelbrot to designate objects that are self-similar at different scales. Such objects have details at every scale. A well-known example is the Mandelbrot set, which is described by a very simple equation yet exhibits an infinite variety of detail. This can be viewed as an extreme form of compression: the equation itself can be described with a few bits of information or implemented in a very short program, but the resulting image would need an infinite number of bits to be represented as a set of pixels. The popular FractInt program can generate very delicate pictures from one-line formulas.
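To make the “very short program” claim concrete, here is a minimal sketch of the Mandelbrot iteration z → z² + c. A point c belongs to the set if the iterates stay bounded; in practice we stop after a fixed number of iterations and treat |z| > 2 as proof of escape (the iteration count and function name here are illustrative choices, not part of any standard):

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to belong to the Mandelbrot set."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c          # the entire "image" is encoded by this one line
        if abs(z) > 2.0:       # once |z| exceeds 2, the orbit must diverge
            return False
    return True
```

Scanning a grid of complex values c through this test and plotting the results reproduces the familiar picture: a handful of lines of code standing in for an image of unbounded detail.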

Mandelbrot did not actually consider fractals for compression, but he showed that they could be used for modeling real objects such as clouds, trees, or mountains. Such objects reveal more detail whenever you look closer at them. Mandelbrot’s book *The Fractal Geometry of Nature*, first published in 1977, attracted a lot of attention, and the word *fractal* became very popular. The images generated by fractal modeling were very realistic-looking, and these techniques are now commonly used in many applications producing computer-generated images.

Michael Barnsley and his coworkers at the Georgia Institute of Technology were the first to recognize the potential interest of fractal methods for image compression. Barnsley developed the theory of Iterated Function Systems (IFS) first introduced by J. Hutchinson in 1981. After the publication of Barnsley’s book *Fractals Everywhere* in 1988, and his paper in the January 1988 issue of *BYTE* magazine, fractal compression became a very fashionable subject. The interest in this technique was aroused by the fantastic compression ratios claimed by Barnsley, up to 10,000 to 1. Together with Alan Sloan, Barnsley founded Iterated Systems, Inc. and obtained US patent 4,941,193 on image compression using IFS.

Unfortunately, the fantastic compression ratios could be obtained only on specially constructed images, and only with considerable help from a person guiding the compression process. This process is also known as the “graduate student algorithm,” consisting of giving a graduate student an office and a graphics workstation, locking the door, waiting until the student has found a good IFS for the image, and opening the door. It was impossible to completely automate the compression process, even with a supercomputer. Thus, IFS-based compression turned out to be impractical.

A breakthrough was made in 1988 by Arnaud Jacquin, one of Barnsley’s Ph.D. students. Instead of trying to find an IFS for a complete image, Jacquin had the idea of partitioning the image into non-overlapping ranges, and finding a local IFS for each range. This transformed the problem into a manageable task, which could be automated. For his doctoral thesis, Jacquin developed the theory of Partitioned Iterated Function Systems (PIFS) and implemented a version of his algorithm in software. Yuval Fisher, Roger Boss, and Bill Jacobs were also among the first to make public contributions to the theory of PIFS.
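The partitioning idea can be sketched as follows. This is a deliberately simplified illustration of PIFS encoding, not Jacquin’s actual implementation: the block sizes, the brute-force domain search, and the function names are all illustrative, and a real encoder would also try the eight rotations and reflections of each domain block.

```python
def downsample(img, x, y, size):
    """Average 2x2 pixel groups of a (2*size)x(2*size) domain block
    down to a size x size block, so it can be compared to a range."""
    return [[(img[y+2*i][x+2*j]   + img[y+2*i][x+2*j+1] +
              img[y+2*i+1][x+2*j] + img[y+2*i+1][x+2*j+1]) / 4.0
             for j in range(size)] for i in range(size)]

def fit(rng, dom):
    """Least-squares contrast s and brightness o mapping dom onto rng,
    plus the resulting squared error."""
    r = [p for row in rng for p in row]
    d = [p for row in dom for p in row]
    n = len(r)
    sd, sr = sum(d), sum(r)
    sdd = sum(x * x for x in d)
    sdr = sum(x * y for x, y in zip(d, r))
    denom = n * sdd - sd * sd
    s = (n * sdr - sd * sr) / denom if denom else 0.0
    o = (sr - s * sd) / n
    err = sum((s * x + o - y) ** 2 for x, y in zip(d, r))
    return s, o, err

def encode(img, rsize=4):
    """For each non-overlapping range block, find the domain block
    (twice its size, then downsampled) that maps onto it best."""
    h, w = len(img), len(img[0])
    transforms = []
    for ry in range(0, h, rsize):
        for rx in range(0, w, rsize):
            rng = [row[rx:rx+rsize] for row in img[ry:ry+rsize]]
            best = None
            for dy in range(0, h - 2*rsize + 1, rsize):
                for dx in range(0, w - 2*rsize + 1, rsize):
                    s, o, err = fit(rng, downsample(img, dx, dy, rsize))
                    if best is None or err < best[0]:
                        best = (err, dx, dy, s, o)
            transforms.append((rx, ry) + best[1:])
    return transforms
```

The output is one small transform per range block (domain position plus contrast and brightness), which is what gets stored in place of the pixels. The search over all domains is what makes encoding slow and decoding fast, a trademark asymmetry of fractal compression.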

In 1991, Barnsley and Sloan obtained US patent 5,065,447 on this technique. Their company, Iterated Systems, sells software and hardware products using it, but they have not made public the details of their technology. In particular, the FIF (Fractal Image Format) used by the Iterated Systems products has not been publicly described. Possibly because of this, and of the patents attached to the method, fractal image compression is not yet used in practice as much as other techniques. However, fractal compression is still a subject of active research, and it has already demonstrated its superiority at least for applications where very high compression ratios are required.

PIFS are also named Local Iterated Function Systems (LIFS), and Barnsley uses the term “Fractal Transform.” They all refer to the same technique. Barnsley’s terminology may be misleading because the Fractal Transform is not a transform in the same sense as a Fourier Transform or a Discrete Cosine Transform, so we will use the PIFS terminology in the rest of this chapter. All practical implementations of fractal image compression, including the program given later in this chapter, are based on PIFS. To better understand what a PIFS is, let us first come back to the technique which preceded it, based on IFS.
