The fundamentals of signal processing are important for users of test instrumentation. To understand dynamic range, it helps to start with the basics of digitizing data. As an analog signal is digitized by the analog-to-digital converter (ADC) of an instrument, the signal is converted to digital samples at a fixed sample rate. For example, an instrument sampling at 100,000 samples per second (100 ksps) records a digital data point 100,000 times a second. At this sample rate, the spacing between sample points (Δt) is 1/100,000 = 0.00001 s. Note that we often use ksps rather than kHz when discussing sample rate to help differentiate between bandwidth and sample rate.
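The relationship between sample rate and sample spacing can be sketched in a few lines (a minimal illustration of the arithmetic above, not instrument code):

```python
# Sketch: sample spacing (delta t) is the reciprocal of the sample rate.
sample_rate = 100_000  # 100 ksps, as in the example above

delta_t = 1.0 / sample_rate  # seconds between consecutive samples
print(delta_t)  # prints 1e-05, i.e. 0.00001 s
```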
When that data is converted at equal time intervals (Δt), the ADC assigns a value to each sample. ADCs are specified by the number of bits they generate per sample. Data Physics was at the forefront of this technology, being one of the first suppliers to adopt 24-bit ADCs, and all Data Physics DSP products use 24-bit digitizers. Today most other analyzer suppliers also use 24-bit ADCs, and it would be difficult to justify buying a 16-bit analyzer.
An ADC with 24-bit resolution means that there are 2^24 integer values that the digitizer can assign to a sample. In other words, the full range of the analog input voltage can be divided into 2^24 values, one of which is assigned to each sample. To give you a better idea, at a +/-1 V input range, there is an available value every 0.119 µV (0.000000119 V)! That is a significant amount of resolution between sampled data values. To fully appreciate resolution, the following table compares resolution for different numbers of bits per sample:
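The comparison can also be computed directly. The sketch below derives the step size (one least significant bit) of an ideal ADC over an assumed +/-1 V input range for a few common bit depths; the chosen bit depths are illustrative, not a vendor specification:

```python
# Sketch: step size (LSB) of an ideal ADC over a +/-1 V input range.
full_scale_span = 2.0  # volts, spanning -1 V to +1 V

for bits in (12, 16, 24):
    levels = 2 ** bits              # number of integer codes available
    lsb = full_scale_span / levels  # volts per code (the step size)
    print(f"{bits}-bit: {levels:>10,} levels, LSB = {lsb:.3e} V")
```

For 24 bits this gives 2 V / 16,777,216 ≈ 0.119 µV per step, matching the figure above.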
Accuracy, though closely related to resolution, is different: accuracy depends on a number of factors including linearity, distortion, noise, and resolution. In other words, a perfectly linear ADC that does not distort and has no internally generated noise is accurate to its least significant bit.
Dynamic range is the ratio between the highest signal that can be measured and the lowest signal that can be measured when they are present together. Most often dynamic range is specified in dB. As the ADCs become more accurate, the noise floor of the instrument – the circuitry through which the signal arrives at the ADC – becomes the limiting factor because the internal noise of the instrument is now higher than the resolution of the ADC. Dynamic range, rather than the number of bits, then becomes the best expression of available measurement range for a given system.
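Since dynamic range is a ratio expressed in dB, it can be sketched as follows. The signal levels here are assumed values for illustration, not measured figures for any instrument:

```python
import math

# Sketch: dynamic range as the dB ratio between the largest and the
# smallest simultaneously measurable signal amplitudes (assumed values).
v_max = 1.0      # largest measurable signal, volts
v_min = 1.0e-6   # smallest distinguishable signal, volts (assumed)

dynamic_range_db = 20.0 * math.log10(v_max / v_min)
print(dynamic_range_db)  # prints 120.0

# For reference, an ideal N-bit ADC has a theoretical SNR of about
# 6.02*N + 1.76 dB (~146 dB for 24 bits); as noted above, real
# instruments are limited by their analog noise floor instead.
```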
The primary benefit of high dynamic range is that very low-level signals can be measured in the presence of high-level signals. In fact, some customers have purchased SignalCalc analyzers simply because they could now see signals that they had never before been able to examine. A very practical side benefit to the user, particularly in structural test applications, is that the constant adjustment of many input voltage ranges is no longer needed to obtain accurate measurements.
It is important to realize that while dynamic range is a primary specification, not all instrument manufacturers specify their instruments the same way. Additionally, some manufacturers are more creative than others in publishing specifications, and there may even be variability between instruments from the same manufacturer. Because of this, the best way to verify the dynamic range of an instrument is to measure it. The most basic test is to ground a terminal and observe the drop, in dB, of the noise floor from full range. A 1 V range is used most often. The plot below shows the dynamic range of the Abacus across its full frequency bandwidth. Another effective test, which also involves the system's DAC, is to loop the output to the input and observe two sine waves, one at full range and one at the smallest observable level.
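The arithmetic behind the grounded-input test can be sketched as below. The instrument's noise is simulated with Gaussian samples at an assumed level (no real hardware involved); the test simply reports the RMS noise floor in dB relative to the 1 V full-scale range:

```python
import math
import random

# Sketch of the grounded-input test: with the input shorted, record
# samples (simulated here as small Gaussian noise at an assumed level)
# and report the RMS noise floor in dB relative to 1 V full scale.
random.seed(0)
full_scale = 1.0          # volts (the commonly used 1 V range)
noise_rms_true = 1.0e-5   # assumed instrument noise level, volts

samples = [random.gauss(0.0, noise_rms_true) for _ in range(100_000)]
rms = math.sqrt(sum(s * s for s in samples) / len(samples))

noise_floor_db = 20.0 * math.log10(rms / full_scale)
print(f"noise floor: {noise_floor_db:.1f} dB re full scale")  # roughly -100 dB
```

With the assumed 10 µV noise level, the floor sits about 100 dB below the 1 V range; a real measurement would substitute recorded samples for the simulated ones.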