Just like audio data (discussed in Chapter 10), graphical images have an advantage over conventional computer data files: they can be slightly modified during the compression/expansion cycle without affecting the user's perception of their quality. Minor changes in the exact shade of a pixel here and there can go completely unnoticed if the modifications are done carefully. Since the graphical images on a computer are generally scanned from real-world sources, they are usually already imperfect renderings of a photograph or some other printed medium. A lossy compression program that doesn't change the basic nature of the image ought to be feasible.
Given that lossy compression for graphical images is possible, how is it implemented? Researchers initially tried some of the same techniques that worked on speech, such as differential coding and adaptive coding. While these techniques helped compress graphics, they did not do as well as hoped. One reason for this lies in a fundamental difference between audio and graphical data.
Audio data sampled using conventional formats tends to be very repetitive. Sounds, including speech, are made of sine waves that repeat for seconds at a time. Though the input stream at the DAC on a computer may consist of dozens of different frequencies added together, sine waves generally combine to produce repetitive waveforms.
The repetitive nature of audio data naturally lends itself to compression. Techniques such as linear predictive coding and adaptive differential pulse code modulation take advantage of this fact to compress audio streams anywhere from 50 to 95 percent.
When research began on compression of graphics, attempts were made to apply similar techniques to digitized images, with some success. Initially, researchers worked on the compression of streams of rasterized data, such as would be displayed on a television set.
When graphics data is rasterized, it is displayed as a sequential stream of pixels. One row at a time is displayed on the screen, working from left to right, then top to bottom. Thus, a thin slice of the picture is painted as each row is completed, until the complete screen is filled. When digitized, pixels can range in size from a single bit to as many as twenty-four bits. Desktop graphics today frequently use eight bits to define a single pixel.
Differential modulation depends on the notion that analog data tends to vary in smooth patterns, with radical jumps in the magnitude of a signal being the exception, not the rule. In audio data, this is true as long as the sampling rate of the signal is somewhat higher than its maximum frequency component.
Differential modulation of an audio signal takes advantage of this fact by encoding each sample as the difference from its predecessor. If audio samples are eight bits each, for example, a differential encoding system might encode the difference between samples in four bits, compressing the input data by 50 percent. The lossy part of the compression scheme arises from the fact that an exact difference can't always be encoded using the standard differential method. The signal may be rising faster than the encoding permits, or the encoding may be too coarse to accommodate a small difference. For audio, the lossy aspect of differential encoding can be managed well enough to produce a good signal.
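The scheme described above can be sketched in a few lines of code. This is an illustrative toy, not a standard codec: the 4-bit signed range and the simple clamping rule are assumptions chosen to show where the loss comes from.

```python
def diff_encode(samples):
    """Encode 8-bit samples as 4-bit signed differences (-8..7).
    Differences outside that range are clamped, which is exactly
    where information is lost (slope overload)."""
    codes = []
    prev = 0  # encoder tracks the decoder's reconstruction
    for s in samples:
        diff = max(-8, min(7, s - prev))
        codes.append(diff)
        prev = max(0, min(255, prev + diff))
    return codes

def diff_decode(codes):
    """Rebuild the sample stream by accumulating the differences."""
    out = []
    prev = 0
    for d in codes:
        prev = max(0, min(255, prev + d))
        out.append(prev)
    return out
```

Running this on a slowly varying signal reproduces it exactly once the decoder catches up, while a sudden jump (say, from 11 to 30) is smeared across several samples because no single 4-bit code can span it.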
Differential modulation has more of a problem when compressing graphical data. For one thing, pixels in a graphical image can't be depended on to vary upward or downward in smooth increments. Sharp dividing lines between different components of an image are the rule, not the exception. This means that a system that relies on differential encoding must accommodate both small and large differences between samples, limiting its effectiveness. Many images will feature long stretches of data where pixels have little or no difference between one another, and these will compress well; others will feature many abrupt changes, and these may not compress at all.
In general, differential encoding of graphical images doesn't seem to produce compression that is significantly greater than that of the best lossless algorithms. It certainly doesn't yield the order-of-magnitude improvement in compression that is needed.
Adaptive coding (which is often used with differential coding) relies on predicting some information about upcoming pixels based on previously seen pixels. If the last ten pixels in a grey-scale photograph all had values between forty-five and fifty, for example, an adaptive compression system might predict with high probability that the next pixel would be in the same range. An entropy-based encoding scheme, such as Huffman or arithmetic coding, could then assign probabilities to various incoming codes. An alternative would be to use a companding scale, with the finest granularity assigned to the range nearest the predicted guess. Assuming that the prediction method enabled you to make an educated guess about the probabilities of the pixels, you should achieve some data compression.
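The prediction step of such a scheme can be sketched as follows. The window size of ten and the mean-based predictor are illustrative assumptions; the entropy coder that would consume the resulting residuals is omitted.

```python
from collections import deque

def residuals(pixels, window=10):
    """Predict each pixel as the rounded mean of the previous
    `window` pixels and emit the prediction error. On smooth
    image data the errors cluster near zero, so an entropy coder
    (Huffman, arithmetic) can assign them short codes."""
    history = deque(maxlen=window)
    out = []
    for p in pixels:
        pred = round(sum(history) / len(history)) if history else 0
        out.append(p - pred)
        history.append(p)
    return out
```

For the grey-scale example in the text, a run of pixels between forty-five and fifty produces residuals of only a few counts each, which is what makes the entropy coding step pay off.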
Most adaptive schemes rely on using just a few of the surrounding pixels as part of the calculation for probabilities of the upcoming pixel. In Figure 11.1, the pixel to be encoded is shown at position 0,0. Pixels that are most commonly used when calculating probabilities are shown at positions A, B, C, and D. Predictions about the upcoming value of the target pixel can be made based on any of several predicting equations:
Figure 11.1 Pixels used for adaptive coding.
Figure 11.2 Pixel predictors.
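A hedged sketch of the kind of predictors Figure 11.2 describes is shown below. The neighbor layout is taken from Figure 11.1 (A to the left of the target pixel, C directly above, B above-left); the particular set of equations here is an assumption, chosen from predictors commonly used in neighbor-based image coding.

```python
def predict(a, b, c, mode):
    """Predict the target pixel from already-processed neighbors:
    a = pixel to the left, b = pixel above-left, c = pixel above.
    These four predictor equations are illustrative examples."""
    if mode == "left":      # P = A
        return a
    if mode == "above":     # P = C
        return c
    if mode == "average":   # P = (A + C) / 2
        return (a + c) // 2
    if mode == "plane":     # P = A + C - B, a local gradient fit
        return a + c - b
    raise ValueError(f"unknown predictor: {mode}")
```

The "plane" predictor, for instance, extrapolates a local gradient: if the row above rose from 90 to 110 and the pixel to the left is 100, it predicts the target continues the same slope.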
These techniques use previous data to calculate the most likely value of the target pixel, and they adjust the coding scheme accordingly. While these calculations produce good results, they again fall short of the order-of-magnitude improvement needed for effective compression.