Testing Modern Radios

Several techniques are now being employed to ensure efficient communications across the chaotic radio spectrum. Chief among them is Software Defined Radio (SDR), which uses software to dynamically control communications parameters such as the frequency band used, modulation type, data rates, and frequency hopping schemes. The United States Department of Defense is driving the development of SDR technology through its multi-billion-dollar Joint Tactical Radio System (JTRS) program, which employs SDRs for applications in a wide range of footprints, from compact, portable units to vehicle-mounted and shipboard platforms.

Figure 1. Frequency hopping signals jammed by large interference. The signal was captured off the air by an RTSA using Digital Phosphor Technology (DPX).

A number of commercial applications have also surfaced that utilize many of the SDR technologies used by the defense electronics industry. Despite the wide variety of SDR applications and footprints, one trait is common among them: frequency hopping. Employed in analog as well as SDR systems, frequency hopping is used to:

  • Avoid detection,
  • Mitigate jamming and interference, and
  • Improve performance in environments with multipath and fading.

Frequency hopping is utilized in conjunction with coding to spread the information over a wide spectrum of frequencies, making systems more robust. If a particular frequency is jammed, the system may lose only the information being transmitted at that frequency, rather than an entire data stream. In these circumstances, interleaving and forward error correction (FEC) can be used to recover data lost during the jammed hop. While frequency hopping is a proven method for improving radio communications, its use continues to evolve. The faster a signal hops, the less likely it is to face detection, interference, or jamming. So although frequency hopping is not a new technique, military and civil defense entities as well as the consumer market are continually striving to increase the speed of frequency hopping in modern radios to further improve and reinforce performance.
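As a rough illustration of how hopping and interleaving limit the damage from a jammed hop, the sketch below (hypothetical channel plan, hop count, and interleaver dimensions throughout) generates a pseudorandom hop sequence and block-interleaves data symbols across hops so that a single jammed hop leaves at most one erasure in any FEC block:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical channel plan: 64 hop channels, 25 kHz spacing, 30 MHz base
channels = 30e6 + 25e3 * np.arange(64)

# Pseudorandom hop sequence (illustrative only; a real radio derives its
# sequence from a generator shared with the receiver)
n_hops = 16
hop_sequence = rng.permutation(64)[:n_hops]

# Block interleaver: data symbols are written row-wise and read column-wise,
# so the symbols carried on any single hop come from different FEC blocks
symbols_per_hop = 8
data = np.arange(n_hops * symbols_per_hop)            # stand-in data symbols
interleaved = data.reshape(n_hops, symbols_per_hop).T.reshape(-1)

# Simulate one jammed hop: the receiver erases that hop's symbols
jammed_hop = 5
rx = interleaved.astype(float)
rx[jammed_hop * symbols_per_hop:(jammed_hop + 1) * symbols_per_hop] = np.nan

# De-interleave: the erasures are now scattered across FEC blocks, at most
# one per block, which forward error correction can typically repair
deinterleaved = rx.reshape(symbols_per_hop, n_hops).T.reshape(-1)
erasures_per_block = np.isnan(deinterleaved).reshape(n_hops, symbols_per_hop).sum(axis=1)
print("hop frequencies (MHz):", np.round(channels[hop_sequence] / 1e6, 3))
print("erased symbols per FEC block:", erasures_per_block)
```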

These efforts have led to notable design and test challenges. Frequency hopped signals and interference sources operate in extremely complex, time-varying spectral environments. The erratic behavior of these signals can make them difficult to acquire, verify, and measure. Effectively designing and testing modern radios that employ increasingly fast frequency hopping techniques requires new tools and methodologies.

Evolving Design and Test Challenges

Faster frequency hopping poses a number of challenges when designing communication systems, especially the system architecture and frequency synthesizers. Modern radios are complex systems, and the controlling software, digital signal processor (DSP), and system components all must work in concert to ensure optimum performance.

Figure 2. An integrated, end-to-end test system for verifying and troubleshooting SDRs, featuring a Tektronix RTSA, arbitrary waveform generator, oscilloscope, and logic analyzer.

Because software actively alters a radio’s operating parameters, there are countless hardware/software combinations that can cause errors. Modulation and filtering transients, distortion, nonlinear power effects, pulse aberrations, frequency tuning and settling, power supply coupling, digital-to-RF couplings, and software-dependent phase errors are common. Designing fast frequency synthesizers presents a significant challenge as well.

Impaired modulation quality due to frequency settling of hopped carriers is one of the primary sources of poor transmitter quality and low system data rates. In the past, designers were able to use conventional test equipment to demodulate stationary carriers located at the center frequency of their modulation analyzer. Unfortunately, conventional test equipment is not capable of demodulating today’s wideband hopped signals. Because these signals hop over the band of operation, analyses of off-center frequencies are required to ensure optimum modulation quality. The dynamic generation of RF waveforms through DSP, and the integration of digital and RF circuits often on the same integrated circuit (IC), also create issues not seen in traditional RF transceiver designs.
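The off-center analysis problem can be illustrated with a short sketch. Assuming a baseband IQ capture of a single hop is already available (all signal parameters below are hypothetical), the hop's offset is estimated from the FFT peak and the capture is mixed back to the analysis center before demodulation:

```python
import numpy as np

fs = 10e6                              # sample rate of the IQ capture (assumed)
n = 4096
t = np.arange(n) / fs

# Hypothetical hop: a carrier sitting 1.3 MHz away from the analysis center
f_offset_true = 1.3e6
rng = np.random.default_rng(0)
iq = np.exp(2j * np.pi * f_offset_true * t) + 0.01 * (rng.standard_normal(n)
                                                      + 1j * rng.standard_normal(n))

# 1) Estimate the hop's offset from the windowed FFT peak
spectrum = np.fft.fftshift(np.fft.fft(iq * np.hanning(n)))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / fs))
f_offset_est = freqs[np.argmax(np.abs(spectrum))]

# 2) Translate the hop back to 0 Hz so an ordinary demodulator can run on it
iq_centered = iq * np.exp(-2j * np.pi * f_offset_est * t)

print(f"estimated offset: {f_offset_est / 1e6:.3f} MHz "
      f"(true offset {f_offset_true / 1e6:.3f} MHz)")
```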

Some of the error sources found in SDRs include modulation transients, nonlinear effects of amplifiers, and digital-to-RF crosstalk. The performance of SDR transmitters must be verified with measurements that are beyond the traditional RF transmitter conformance tests. Simply passing these tests does not ensure a device will work properly, and system behavior must be carefully and thoroughly observed since software is continually changing system parameters.

Test Solutions

Truly addressing these challenges requires SDR designers to fully analyze and characterize their systems. Discovery of true system behavior is important to identify potential RF spectrum anomalies. As system parameters change over time, performing frequency-selective triggering is necessary to pinpoint the instant a transient event occurs. Performing time-correlated analysis in multiple domains is required to determine the specific cause of each problem. Capturing the entire event seamlessly into memory is valuable for subsequent analysis, as it can be difficult to recreate the conditions under which the transient occurred. These advanced troubleshooting methods of verifying signal performance over time, combined with traditional conformance tests performed under steady-state conditions, are necessary for comprehensive SDR testing.
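Frequency-selective triggering can be sketched conceptually as follows; this is a simplified model of the idea, not the implementation used inside an RTSA. Short-time spectra are computed over a streaming capture, and the first frame whose spectrum exceeds a user-defined mask fires the trigger:

```python
import numpy as np

def frequency_mask_trigger(iq, fs, mask_dB, frame_len=1024):
    """Return the time of the first frame whose spectrum violates the mask."""
    window = np.hanning(frame_len)
    for k in range(len(iq) // frame_len):
        frame = iq[k * frame_len:(k + 1) * frame_len] * window
        spec = np.fft.fftshift(np.fft.fft(frame)) / frame_len
        level_dB = 20 * np.log10(np.abs(spec) + 1e-12)
        if np.any(level_dB > mask_dB):
            return k * frame_len / fs          # trigger: mask violated here
    return None                                # no violation in this capture

# Example: a transient tone appears halfway through an otherwise quiet capture
fs = 1e6
n = 64 * 1024
rng = np.random.default_rng(0)
iq = 1e-4 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
t = np.arange(n) / fs
iq[n // 2:] += 0.5 * np.exp(2j * np.pi * 100e3 * t[n // 2:])

mask = np.full(1024, -40.0)                    # flat -40 dBFS mask (illustrative)
print("trigger time (s):", frequency_mask_trigger(iq, fs, mask))
```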

Having a verified system architecture design is vital to the success of a modern communication system. The more access points that are tested and verified, the less likely it is that issues will manifest during the last system integration phase. Some of the major contributors to system failures are DSP, RF circuitry, and the controlling software. A verification debug tool will greatly aid system designers in effectively discovering problems. Once an error has been identified, it must be isolated and understood. To isolate a problem and determine its root cause, it is important to time-correlate the error back through the signal path. Since the signal information changes form in an SDR design from digital words to continuously variable analog voltages, several pieces of test equipment may be needed to diagnose the exact source of problems.

Figure 3. Real-time spectrum emission mask testing allows the designer to understand radio compliance during all radio modes, including transmitter turn-on. This transmitter passes the mask test in steady-state operation (left), but non-compliance can be seen at transmitter turn-on and 352 ms after turn-on (right).

Because the problem may occur at any point in the signal path, and memory capacity in oscilloscopes and logic analyzers is limited, the ability to simultaneously trigger multiple test instruments and capture the exact moment in time that the event occurs is important. This requires that each instrument be able to trigger in its own domain (logic analyzers on digital triggers, oscilloscopes on time-domain amplitude triggers, and spectrum analyzers on frequency-domain triggers) and that the trigger latency between instruments be deterministic.

An integrated, end-to-end test system comprising a real-time spectrum analyzer (RTSA), arbitrary waveform generator (AWG), oscilloscope, and logic analyzer can be invaluable for testing SDRs (Figure 2). These instruments are able to work in unison with cross-triggering and time-correlated subsystem views to verify SDR performance and perform multiple test procedures at the physical and various software layers. The test system can also be used to understand the complex interactions between SDR subsystems in the frequency and time domains, especially in bursted or frequency hopped signals. When filtered and amplified, software anomalies can create temporal RF impulse bursts of energy at the RF output. To isolate software and hardware performance, the RTSA can be used to trigger on transients in the frequency domain, capture the events into memory, and drive the other test instruments to probe possible error sources.

The unique ability of the RTSA to find problems from spectral transients can be used to trigger the other instruments and obtain time-correlated views of vastly different hardware and software functional implementations. For example, the RTSA can capture the signal in the RF and IF portions of the signal paths, and a logic analyzer can capture the digital baseband signal and compare it to the Symbol Table produced by the RTSA. Furthermore, the RTSA’s off-line software (RSAVu) can be used to analyze acquired data from the logic analyzer and oscilloscope, allowing hardware and software measurement correlation.

Transmitter Measurements

SDR designs that include compatibility with legacy modulation formats present a number of unique test challenges. First, a complete set of traditional modulation and distortion measurements is required to characterize the compliance of the replacement transmitter. Such measurements include FM deviation or AM modulation depth, as well as S/N, SINAD, THD, and Total Non-Harmonic Distortion (TNHD).

These latter measurements require the spectrum analyzer to include specific audio filters and de-emphasis settings to produce valid results. For some transmitters, measurements such as hum and noise may also be required to understand the amount of unintended modulation present in the transmission from sources other than the originating test signal.
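These audio figures can be estimated from the demodulated audio with simple FFT arithmetic. The sketch below is a minimal illustration that assumes a single-tone test signal and omits the de-emphasis and band-limiting filters a compliant measurement would apply:

```python
import numpy as np

def thd_sinad(audio, fs, f_tone, n_harmonics=5):
    """Rough THD and SINAD estimates (in dB) for a single-tone test signal."""
    n = len(audio)
    spec = np.abs(np.fft.rfft(audio * np.hanning(n))) ** 2    # power spectrum
    bin_width = fs / n

    def band_power(f_center, half_width=3):
        k = int(round(f_center / bin_width))
        return spec[max(k - half_width, 0):k + half_width + 1].sum()

    p_fund = band_power(f_tone)
    p_harm = sum(band_power(m * f_tone) for m in range(2, n_harmonics + 1))
    p_total = spec.sum()
    thd = 10 * np.log10(p_harm / p_fund)
    sinad = 10 * np.log10(p_total / (p_total - p_fund))
    return thd, sinad

# 1 kHz test tone with a small 2nd harmonic and noise (hypothetical levels)
fs, f_tone = 48000, 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
audio = (np.sin(2 * np.pi * f_tone * t)
         + 0.01 * np.sin(2 * np.pi * 2 * f_tone * t)
         + 0.002 * rng.standard_normal(t.size))
thd, sinad = thd_sinad(audio, fs, f_tone)
print(f"THD   = {thd:6.1f} dB")
print(f"SINAD = {sinad:6.1f} dB")
```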

While compliance measurements ultimately boil down to meeting numeric test values, failure to meet specifications requires designers to understand the nature of the distortion in order to isolate the problem. This is where an RTSA can significantly improve time-to-insight in resolving critical design issues. Besides offering all of these traditional distortion measurements in a single instrument, real-time analyzers offer the ability to view the audio spectrum to understand which harmonic and non-harmonic components contribute most to the distortion metric. In addition, viewing the spectrogram of the demodulated audio can give the designer insight into which distortion products are at the source of non-compliant behavior. By capturing the time-varying nature of the audio signal, such as during transmitter turn-on, the audio spectrogram shows when and where distortion products first appear and how they behave over time (Figure 3). Tools such as time-correlated markers allow the user to scroll through the spectrogram display to see how both the spectrum and the audio distortion parameters change over time, allowing the designer to isolate the problem faster than with traditional distortion analyzers.
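The spectrogram view itself can be approximated offline with standard signal-processing tools. In the sketch below, the demodulated audio is purely synthetic: a 1 kHz tone with a 3 kHz distortion product that decays after a simulated turn-on:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 48000
t = np.arange(2 * fs) / fs                 # two seconds of demodulated audio
# Hypothetical audio: a 1 kHz tone plus a 3 kHz distortion product that
# decays over the first 200 ms, mimicking a turn-on transient
tone = np.sin(2 * np.pi * 1000 * t)
h3 = np.clip(1 - t / 0.2, 0, 1) * 0.2 * np.sin(2 * np.pi * 3000 * t)
audio = tone + h3

f, frames, Sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=768)
Sxx_dB = 10 * np.log10(Sxx + 1e-12)
# Each column is one audio spectrum; scanning across columns shows when the
# 3 kHz distortion product appears and how long it persists after turn-on
print(Sxx_dB.shape, "frequency bins x time slices")
```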

A second test challenge involves characterizing radios that utilize high modulation indices (peak deviation divided by modulating frequency). Modems for data and telemetry systems in which wideband FM modulation (>1 MHz deviation) is employed routinely have modulation indices exceeding 1000, requiring the spectrum analyzer not only to possess enough demodulation bandwidth, but also to have enough resolving power in the demodulated audio signal to compute SINAD and THD. Real-time analyzers offer high-resolution modes through the use of variable-length FFTs, allowing users to see how the modulating signal changes with increases in modulation rate.
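As a worked example with hypothetical but representative numbers, a link with 1 MHz peak deviation and a 1 kHz modulating tone has a modulation index of

```latex
\beta = \frac{\Delta f_{\mathrm{peak}}}{f_m} = \frac{1\ \mathrm{MHz}}{1\ \mathrm{kHz}} = 1000
```

and, by Carson's rule, an approximate occupied bandwidth of 2(Δf + f_m) ≈ 2 MHz, which is why both the demodulation bandwidth and the analyzer's resolution in the demodulated audio become the limiting factors.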

A final challenge exists for designers of multi-level FSK radios. FSK is a legacy modulation format used in SINCGARS (Single Channel Ground and Airborne Radio System) and other VHF radios for secure tactical military communications. While validation of modulation compliance to a standard may be a straightforward task of measuring rms and peak FSK errors, designers need additional tools that allow them to investigate the cause of non-compliance. Such tools allow the engineer to investigate frequency deviations for each symbol point, as well as symbol timing errors over any portion of the transmission. An RTSA consolidates a number of FSK measurements on one display to speed the task of RF debug and design compliance (Figure 4).
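A simplified version of such an investigation is sketched below for a hypothetical 4-level FSK burst; symbol timing is assumed known rather than recovered, and the error is left in hertz rather than normalized to the deviation:

```python
import numpy as np

fs = 1e6                                  # IQ sample rate (assumed)
symbol_rate = 25e3                        # hypothetical 4-FSK symbol rate
sps = int(fs / symbol_rate)               # samples per symbol
deviations = np.array([-7500.0, -2500.0, 2500.0, 7500.0])   # nominal tones, Hz

# Build a test burst of random 4-FSK symbols (stand-in for a captured hop)
rng = np.random.default_rng(0)
symbols = rng.integers(0, 4, 200)
inst_freq_tx = np.repeat(deviations[symbols], sps)
phase = 2 * np.pi * np.cumsum(inst_freq_tx) / fs
iq = np.exp(1j * phase) + 0.005 * (rng.standard_normal(phase.size)
                                   + 1j * rng.standard_normal(phase.size))

# Instantaneous frequency from the derivative of the unwrapped phase
inst_freq_rx = np.diff(np.unwrap(np.angle(iq))) * fs / (2 * np.pi)

# Average the middle half of each symbol to get its measured deviation
measured = np.array([
    inst_freq_rx[i * sps + sps // 4 : i * sps + 3 * sps // 4].mean()
    for i in range(len(symbols))
])

# FSK error: measured deviation minus the nearest nominal deviation per symbol
nearest = deviations[np.abs(measured[:, None] - deviations[None, :]).argmin(axis=1)]
error = measured - nearest
print(f"rms FSK error:  {np.sqrt(np.mean(error ** 2)):7.1f} Hz")
print(f"peak FSK error: {np.abs(error).max():7.1f} Hz")
```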

Frequency Settling Time of Hopped Signals

Frequency settling time is the time a synthesizer needs to leave one hopped frequency and settle within a specified tolerance of the next. It is one of the primary contributors to a frequency hopping system’s efficiency: the shorter the frequency settling time, the faster a system can hop. Measuring the frequency settling time ensures optimum synthesizer operation and maximizes overall system performance.

The traditional way of measuring frequency settling time was limited by the instrumentation and was very time-consuming. Engineers were forced to rely on oscilloscopes and frequency discriminators for the test, showing only the signal envelope and hinting at the stability of the signals. While oscilloscopes have excellent timing resolution, using them to measure small frequency changes can be challenging depending on the frequency resolution required for the measurement. Oscilloscopes cannot automatically measure hopped frequencies, and frequency settling time can only be estimated.

Figure 4. RTSAs display numerous FSK and symbol timing errors to speed RF debug. Note how all relevant FSK parameters are displayed on one screen, including symbol timing error.

An RTSA makes this measurement far simpler. By setting parameters such as the frequency settling threshold and smoothing factor, engineers can measure the frequency settling time of hopped signals quickly and accurately, and can also see how the spectrum changes during the hops. In addition to time-correlated measurements across multiple domains, Tektronix RTSAs feature Digital Phosphor Technology (DPX) and Frequency Mask Trigger (FMT). These unique features simplify the troubleshooting of frequency hopping signals. DPX technology gives engineers a tool to instantly discover problems: in allowing users to view live RF signals for the first time, DPX provides insight into RF signal behavior.
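A simplified, offline version of the settling-time calculation is sketched below; the settling threshold and smoothing window play roles analogous to the RTSA parameters mentioned above, and the hop itself is modeled as an exponentially settling synthesizer (all numbers hypothetical):

```python
import numpy as np

def settling_time(iq, fs, f_target, threshold_hz, smooth=32):
    """Time until the instantaneous frequency stays within +/-threshold_hz
    of f_target for the remainder of the capture."""
    inst_freq = np.diff(np.unwrap(np.angle(iq))) * fs / (2 * np.pi)
    # Moving-average smoothing, analogous in spirit to an RTSA smoothing factor
    inst_freq = np.convolve(inst_freq, np.ones(smooth) / smooth, mode="valid")
    outside = np.nonzero(np.abs(inst_freq - f_target) > threshold_hz)[0]
    return 0.0 if outside.size == 0 else (outside[-1] + 1) / fs

# Model a synthesizer hop as an exponentially settling frequency (hypothetical)
fs = 50e6
t = np.arange(20000) / fs
f_final, f_err0, tau = 5e6, 200e3, 20e-6       # 200 kHz initial error, 20 us
inst = f_final - f_err0 * np.exp(-t / tau)
rng = np.random.default_rng(0)
iq = np.exp(2j * np.pi * np.cumsum(inst) / fs)
iq = iq + 1e-4 * (rng.standard_normal(t.size) + 1j * rng.standard_normal(t.size))

print(f"settling time to +/-1 kHz: {settling_time(iq, fs, f_final, 1e3) * 1e6:.1f} us")
```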

Receiver Sensitivity Measurements

SINAD is perhaps the most popular sensitivity metric in receiver testing. SINAD can be described in general terms as the ratio of total signal energy to the energy contained in the noise and distortion components of the signal. Reference sensitivities of 10 or 12 dB SINAD are popular metrics with tactical radio manufacturers. These tests describe the level of sensitivity that results in good intelligibility of the transmitted signal. Besides reference sensitivity, SINAD is used as the basis for many other receiver measurements such as signal displacement bandwidth, adjacent channel rejection, offset channel selectivity, and various rejection measurements. For each of these tests, SINAD is the fundamental metric for verifying design compliance. Though the RTSA performs SINAD measurements automatically, the designer can tailor how the measurement is performed by setting the total number of harmonics and the threshold levels to include in the measurement.
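Expressed as a formula, with S, N, and D the signal, noise, and distortion powers in the recovered audio:

```latex
\mathrm{SINAD} = 10 \log_{10}\!\left(\frac{S + N + D}{N + D}\right)\ \mathrm{dB}
```

A 12 dB reference sensitivity therefore corresponds to the total audio power being roughly sixteen times the residual noise-plus-distortion power.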

Summary

Software defined radios that integrate legacy and modern modulation schemes present unprecedented test challenges that conventional test instruments are unable to address. Those that employ frequency hopping techniques complicate the design and validation tasks even further. These radios require a new, flexible, integrated approach to SDR subsystem and system validation. RTSAs deliver an all-in-one solution to improve the designer’s time-to-insight and lower the manufacturer’s cost of test.

This article was contributed by Tektronix, Beaverton, OR.