
Eventually, the sound needs to be played back. This is done via another electronic component that is the converse of the ADC: the digital-to-analog converter (DAC). The DAC is responsible for taking a digital value and converting it to a corresponding analog signal. To be effective, the conversion process needs to be the mirror image of that performed when converting the analog signal to digital. While the exact voltages produced at the output of the DAC do not need to be identical to those seen at the input, they do need to be proportional to one another so that one waveform corresponds to the other. In addition, the samples need to be output at exactly the same rate that they were read in. Any deviation here will cause the output frequencies to be shifted up or down from the input, generally not a good thing.
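The proportionality requirement can be sketched in a few lines. Assuming, for illustration, unsigned eight-bit sample codes and a hypothetical ±500mv full-scale range (the names `dac_output` and `full_scale_mv` are made up for this example), the DAC only needs to map each code to a level that is a fixed multiple of the code value:

```python
def dac_output(samples, full_scale_mv=500, bits=8):
    """Map unsigned n-bit sample codes back to millivolt levels.

    The absolute scale (full_scale_mv) is arbitrary -- only the
    proportions between the output levels must match the original
    waveform, so any consistent scale factor works.
    """
    step = (2 * full_scale_mv) / (2 ** bits)   # mv per code, ~3.9mv here
    return [code * step - full_scale_mv for code in samples]

# Code 0 is the full negative swing; code 128 sits at the center line.
levels = dac_output([0, 64, 128, 192, 255])
```

Note that the playback clock matters just as much as the level mapping: feeding these levels out at anything other than the original sampling rate shifts every frequency in the result.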

Figure 10.4 shows the output of the DAC when given the same set of samples produced in Figure 10.2. At first glance, it seems that this is a radically different waveform. All the nice, smooth shapes shown in the earlier figures are gone, replaced by this stair-step, rectangular, artificial-looking creation.


Figure 10.4  DAC output

Fortunately, Figure 10.4 is not that far removed from Figure 10.1. Mathematically, the sharp jumps that occur when we move from sample to sample represent high-frequency components in the output signal. These can (and must) be eliminated from the signal by means of a low-pass filter that lies between the output of the DAC and the final amplification stage of the audio output.

A low-pass filter is a network of electrical components designed to let frequencies below a certain value pass through it unhindered, while attenuating frequencies above that point. An ideal low-pass filter used with the samples shown here would completely stop any frequency above 4KHz and let frequencies below 4KHz pass through with no attenuation.

In practice, low-pass filters don’t work perfectly, but even a low-budget filter can take Figure 10.4 and create a nearly indistinguishable copy of Figure 10.1. Without the filter, the sound sample will still be intelligible, but it will be filled with distracting high-frequency “noise” that is part of the reproduction process.

Figure 10.5 shows the same waveform sampled at a much higher rate. This increase in sampling rate clearly does a more accurate job of reproducing the signal. The next section discusses how variations in these parameters affect the output signal.


Figure 10.5  Sampling at a much higher rate.

Sampling Variables

When an audio waveform is sampled, two important variables affect the quality of the reproduction: the sample rate and the sample resolution. Both are important factors, but they play different roles in determining the level of distortion produced when a sample is played back.

The sample resolution is simply a measure of how accurately the digital sample can measure the voltage it is recording. When the input range is -500mv to +500mv, for example, an eight-bit ADC can resolve the input signal down to about 4mv. So an input signal of 2mv will either get rounded up to 4mv or down to 0mv. This is called a quantization error.
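The arithmetic behind that example is easy to reproduce. Using the same assumed ±500mv range and eight bits (the helper name `quantize` is made up here), the step size is 1000mv / 256 ≈ 3.9mv, and any input lands on the nearest multiple of that step:

```python
def quantize(voltage_mv, full_scale_mv=500, bits=8):
    """Round a voltage to the nearest level an n-bit ADC can represent
    over a +/- full_scale_mv input range."""
    step = (2 * full_scale_mv) / (2 ** bits)   # ~3.9mv for 8 bits
    return round(voltage_mv / step) * step

# A 2mv input is closer to the ~3.9mv level than to 0mv, so it rounds up;
# a 1mv input rounds down to 0mv. The difference is the quantization error.
```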

Figure 10.6 shows the results of quantization error when sampling a waveform. In some cases the sample point has a larger magnitude than the audio signal, but in other places it has less. When the digitized signal is played back through a DAC, the output waveform will closely track the sample points, resulting in a certain amount of distortion.


Figure 10.6  Quantization error when sampling a waveform

It might seem that eight bits should be enough to accurately record audio data, but this may not be the case because of the large dynamic range of audio the human ear can detect. If our 500mv range example were used, we might find that our input signal magnitudes range from 1mv to 500mv in a single recording session. The crash of drums in an orchestra could push the ADC to its limits, while a delicate violin solo may never go outside 5mv. If the minimum digital resolution is only 5mv, a very noticeable level of distortion will be introduced during this part of a recording session.
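One common way to put a number on this limitation is the rule of thumb that each bit of resolution contributes about 6dB of dynamic range, i.e. 20·log10(2^n) for n bits. The function name below is illustrative, but the formula makes the eight-bit problem concrete: roughly 48dB for eight bits versus roughly 96dB for sixteen:

```python
import math

def dynamic_range_db(bits):
    """Approximate dynamic range of an n-bit sampler in decibels:
    the ratio of the largest representable swing to one step,
    expressed as 20 * log10(2**bits)."""
    return 20 * math.log10(2 ** bits)

# Eight bits gives ~48dB; sixteen bits gives ~96dB, which is why
# sixteen-bit samples handle both the drum crash and the quiet violin.
```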

The sampling rate plays a different role in determining the quality of digital sound reproduction. One classic law in digital signal processing was published by Harry Nyquist in 1928. He determined that to accurately reproduce a signal of frequency f, the sampling rate has to be greater than 2*f. This is commonly called the Nyquist rate.
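The consequence of violating this limit is aliasing: a frequency above half the sampling rate "folds back" and masquerades as a lower one. A minimal sketch of the folding arithmetic (the function name is hypothetical) shows, for example, that a 5KHz tone sampled at 8KHz comes out looking like a 3KHz tone:

```python
def aliased_frequency(f_hz, sample_rate_hz):
    """Frequency a pure tone of f_hz appears to have after sampling.

    Tones below sample_rate/2 (the Nyquist limit) are preserved;
    tones above it fold back into the 0..sample_rate/2 band.
    """
    f = f_hz % sample_rate_hz
    return min(f, sample_rate_hz - f)

# 3KHz sampled at 8KHz survives intact; 5KHz folds back to 3KHz.
```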

The audio signal in Figure 10.7 is being measured at a considerably slower rate than that shown in the previous examples, with noticeably negative consequences. At several places in the waveform it is not even sampled a single time during an excursion above or below the center line.


Figure 10.7  A slower sampling rate.

Figure 10.8 shows the waveform we could expect after playing back the digitized samples stored from Figure 10.7. Clearly, after the digitized output is filtered, the resulting waveform differs quite a bit from that shown in the previous figure. What has happened is that the high-frequency components of the waveform have been lost by the slower sampling rate, letting only the low-frequency parts of the sample through.


Figure 10.8  The waveform after playing back digitized samples.

The human ear hears sound up to 20KHz, which implies that we need to sample audio at 40KHz or better to achieve good reproduction. In fact, the sampling rate used for digital reproduction of music on compact disk is 44.1KHz, using sixteen-bit samples, and digital audio tape typically uses 48KHz. The quality of sound achieved at these sampling rates is generally acknowledged to be superior.

This does not mean that all digital recordings have to be done at 44KHz rates. Virtually every digital phone system in the world uses an 8KHz sampling rate to record human speech, with generally good results. This means that the phone system is unable to pass any signal with a frequency of 4KHz or higher. This clearly does not render the system useless—millions of long-distance calls over digital lines are made every day. The average speech signal is composed of many different frequencies, and even if everything above 4KHz is discarded, most of the speech energy still makes it through the system. Our ears detect this loss as a lower-fidelity signal, but they still understand it quite well.

The ultimate test of all this is how the audio output sounds to our ears. It is difficult to quantify a “quality of sound” measurement in strictly mathematical terms, so when discussing audio output, it is always best to temper judgments with true listener trials.
