
The sample program used to demonstrate fractal compression in this chapter is found in the C source file FRAC.C. It must be compiled and linked with the standard support source files, BITIO.C, ERRHAND.C, and either MAIN-C.C for compression or MAIN-E.C for expansion.

The fractal compression program optionally takes additional parameters on the command line:

- The quality value, ranging from 0 to 20. It is used as the average tolerated error between the original image and its uncompressed version. Small values result in better image quality; large values result in better compression ratios. The default value has been arbitrarily chosen as 2.
- The domain density factor, ranging from 0 (fastest compression) to 2 (best but very slow compression). This parameter affects the size of the domain pool that will be searched.
- Horizontal and vertical image sizes (default 320 x 200). Both sizes must be multiples of 4 in this implementation.

The command syntax for the compression program is:

FRAC infile outfile [-q quality] [-d density] [-h h_size] [-v v_size]

The image dimensions and the domain density factor are encoded in the compressed file, so the expansion program doesn’t need these parameters on the command line. The syntax for expansion is:

FRAC-E infile outfile [-i iterations] [-s scale]

The optional parameters are:

- The number of iterations, ranging from 1 to 15. The image quality does not improve much after 8 to 10 iterations. The default is 8.
- The scale factor (decompressed size divided by original size). The default is 1.

A summarized version of the main compression module is shown below.

void CompressFile(FILE *input, BIT_FILE *output, int argc, char *argv[])
{
    /* Allocate and initialize the image data and cumulative image data: */
    compress_init(x_size, y_size, input);

    /* Initialize the domain size information as in the decompressor: */
    dominfo_init(x_size, y_size, dom_density);

    /* Classify all domains: */
    for (s = MIN_BITS; s <= MAX_BITS; s++) {
        classify_domains(x_size, y_size, s);
    }

    /* Output the header of the compressed file: */
    OutputBits(frac_file, (uns_long)x_size, 16);
    OutputBits(frac_file, (uns_long)y_size, 16);
    OutputBits(frac_file, (uns_long)dom_density, 2);

    /* Compress the whole image recursively: */
    traverse_image(0, 0, x_size, y_size, compress_range);

    /* Free all dynamically allocated memory: */
    compress_cleanup(y_size);
}

We first allocate and initialize all the necessary data structures. Then we classify all the possible domains. This speeds up the compression process considerably, as we will see later. After this, we store the image dimensions and the domain density factor in the compressed output file, since these parameters are needed by the decompressor.

The bulk of the work is done in the call to the traverse_image() function, which partitions the image recursively and compresses each partition. Finally, we free all the dynamically allocated data structures.

The initialization function compress_init() first allocates and initializes arrays for the image and domain data. Since the domains are twice as large as the ranges, each pixel in the domain image is the sum of 4 pixels in the range image. We do not average (divide by 4); this keeps the data in integer format without losing precision.
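The downsampling step can be sketched as follows. This is a minimal illustration with hypothetical names (build_domain_image() is not the program's actual function), assuming a row-major 8-bit image:

```c
/* Hypothetical sketch: build a half-size "domain image" in which each
 * entry is the SUM of a 2x2 block of pixels from the full-size image.
 * Keeping the sum (rather than the average) stays in integer
 * arithmetic and loses no precision; the factor of 4 is simply
 * accounted for in later comparisons. */
static void build_domain_image(const unsigned char *range_img,
                               unsigned *dom_img,
                               int x_size, int y_size)
{
    int x, y;
    for (y = 0; y < y_size / 2; y++) {
        for (x = 0; x < x_size / 2; x++) {
            /* Top-left pixel of the 2x2 block in the full image: */
            const unsigned char *p = range_img + 2 * y * x_size + 2 * x;
            dom_img[y * (x_size / 2) + x] =
                p[0] + p[1] + p[x_size] + p[x_size + 1];
        }
    }
}
```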

To speed up the compression process, we initialize cumulative tables for the image and domain data. We often have to compute the sum of pixel values or squared pixel values in a range or domain. To avoid repeated computations, we maintain for each pixel the sum of all pixel values strictly above and to the left of the given pixel. Then we can easily obtain the sum of all pixel values within a square region by adding the cumulative values at the top left and bottom right corners, and subtracting the cumulative values at the top right and bottom left corners. This is done in the code by the region_sum() macro.

To reduce the memory requirements, some tables are maintained only for pixels with even coordinates. In particular, domains are restricted to have their top-left pixel at even coordinates. To simplify some algorithms, ranges are also restricted to sizes which are multiples of four; hence the width and height of the image must also be multiples of four in our simple implementation.

To get the best possible image quality, the compressor could compare a given range with all possible domains. However, this would be extremely slow. For a 320x200 image with ranges 4 pixels wide and domains 8 pixels wide, there are 4000 ranges and 313x193 = 60,409 domains if the domains can start on any pixel boundary. Thus 241,636,000 domain-range comparisons would have to be made, each comparison involving a least-squares regression analysis to find the optimal affine map.
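The arithmetic above can be checked with a few lines of C (count_pairs() is a hypothetical helper, not part of the program):

```c
/* Hypothetical sketch reproducing the counting argument in the text:
 * non-overlapping ranges tile the image, while domains may start on
 * any pixel boundary. */
static long count_pairs(int x_size, int y_size,
                        int range_size, int dom_size)
{
    long n_ranges  = (long)(x_size / range_size) * (y_size / range_size);
    long n_domains = (long)(x_size - dom_size + 1)
                   * (long)(y_size - dom_size + 1);
    return n_ranges * n_domains;   /* comparisons, before any pruning */
}
```

For the 320x200 case this yields 4000 ranges, 60,409 domains, and 241,636,000 comparisons, matching the figures above.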

Note: Our sample program does not attempt comparisons in all 8 possible orientations of the range relative to the domain; doing so would increase the compression time by yet another factor.

To avoid such a lengthy computation, we reduce the size of the domain pool using the domain density parameter. For a density value of zero, the domains start on a boundary that is a multiple of their size, so the domains do not overlap. When compression time is not an issue and the best possible image quality is sought, the domain step (the distance between two consecutive domains) is divided by 2 for a density of 1, or by 4 for a density of 2. The function dominfo_init(), which is common to the compressor and the decompressor, takes this into account.
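The effect of the density parameter on the domain step can be sketched in one line (domain_step() is a hypothetical helper written in the spirit of dominfo_init(), not its actual code):

```c
/* Hypothetical sketch: each density level halves the step between
 * consecutive domains, enlarging the domain pool at the cost of
 * compression time. Density 0 gives non-overlapping domains. */
static int domain_step(int dom_size, int density)
{
    return dom_size >> density;   /* density ranges from 0 to 2 */
}
```

For 8-pixel domains, the step is 8, 4, or 2 pixels for densities 0, 1, and 2 respectively, which is why both sides must agree on the density value stored in the file header.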
