Software-defined radio (SDR) is a widely accepted paradigm for the design of reconfigurable modems. The continuing march of Moore's law makes real-time signal processing on general-purpose processors feasible for a large set of waveforms. Data rates in the low Mbps can be processed on low-power ARM processors, and much higher data rates can be supported on large x86 processors. The advantages of all-software development (vs. FPGA/DSP/GPU) are compelling: a much wider pool of talent, lower development time and cost, and easier maintenance and porting. However, very high-rate systems (above 100 Mbps) are still firmly in the domain of custom and semi-custom hardware (mostly FPGAs). In this paper we describe an architecture and testbed for an SDR that can be easily scaled to support over 3 GHz of bandwidth and data rates up to 10 Gbps. The paper covers a novel technique to parallelize typically serial algorithms for phase and symbol tracking, followed by a discussion of data distribution for a massively parallel architecture. We provide a brief description of a mixed-signal front end and conclude with measurement results. To the best of the authors' knowledge, the system described in this paper is an order of magnitude faster than any prior published result.
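As an illustration of why feedforward, block-based estimators parallelize where a conventional serial PLL does not, the sketch below applies a textbook Viterbi & Viterbi fourth-power phase estimator to independent blocks of QPSK symbols. It is not the parallelization technique of the paper; the block length and toy signal are arbitrary choices for demonstration only.

```python
import numpy as np

def blockwise_vv_phase(symbols, block_len=256):
    """Feedforward (Viterbi & Viterbi) phase estimation over independent blocks.

    Each block is processed without feedback from its neighbors, so the blocks
    can be dispatched to separate cores, unlike a conventional serial PLL.
    Returns one phase estimate per block (QPSK fourth-power estimator, with the
    pi/2 ambiguity handled only by block-to-block unwrapping).
    """
    n_blocks = len(symbols) // block_len
    blocks = symbols[: n_blocks * block_len].reshape(n_blocks, block_len)
    # fourth-power nonlinearity strips the QPSK modulation; average per block
    est = np.angle(np.sum(blocks ** 4, axis=1)) / 4.0
    # cheap serial pass to keep the estimate continuous across blocks
    return np.unwrap(4.0 * est) / 4.0

# toy usage: QPSK symbols with a slow phase ramp
rng = np.random.default_rng(0)
syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 4096)))
rx = syms * np.exp(1j * np.linspace(0.0, 0.3, syms.size))
print(blockwise_vv_phase(rx)[:4])
```

Because every block estimate depends only on that block's samples, the expensive step maps directly onto a pool of worker threads or processes; only the final unwrapping is serial, and it operates on one value per block rather than per sample.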
A software-defined radio based on the GNU Radio (GR) [1][11] framework has been developed. The radio can be used for real-time reception or transmission of signals, or for simulating different scenarios. It combines native GR blocks with custom blocks and runtime-configurable flowgraph generation. The flowgraph can be defined and monitored through a comprehensive graphical user interface (GUI) or from a JSON configuration file. The radio supports a wide range of waveforms, from standard PSK to the less common U/AQPSK and OQPSK. An integrated channel model includes the effects of a nonlinear power amplifier, phase noise, frequency offset, thermal noise, and Doppler. The GUI allows the user to quickly instantiate multiple instances of the SDR to study interaction between adjacent channels. This is increasingly important because, as waveforms become more complex and rely on powerful error correction, the effect of an adjacent-channel interferer on a victim signal becomes more difficult to predict. The paper presents detailed block diagrams of the radio in different configurations.
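For readers unfamiliar with GR flowgraphs, the sketch below shows the general pattern the radio builds on: a source feeding the stock gr-channels channel model (noise, frequency offset, timing drift, multipath taps) and a sink. It uses only standard GNU Radio blocks and illustrative parameter values; the paper's nonlinear power amplifier, phase noise, and Doppler models, JSON-driven configuration, and GUI are not reproduced here.

```python
import numpy as np
from gnuradio import gr, blocks, channels

class channel_demo(gr.top_block):
    """Minimal sketch: QPSK symbols passed through the stock gr-channels model."""
    def __init__(self, snr_db=15.0, freq_offset=1e-4):
        gr.top_block.__init__(self, "channel_demo")
        rng = np.random.default_rng(0)
        syms = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 4096)))
        noise_v = 10.0 ** (-snr_db / 20.0)   # noise voltage for unit-power symbols
        src = blocks.vector_source_c(syms.tolist(), False)
        # args: noise voltage, normalized frequency offset, timing epsilon,
        #       channel taps, noise seed
        chan = channels.channel_model(noise_v, freq_offset, 1.0, [1.0 + 0.0j], 0)
        self.sink = blocks.vector_sink_c()
        self.connect(src, chan, self.sink)

if __name__ == "__main__":
    tb = channel_demo()
    tb.run()
    print(len(tb.sink.data()), "samples through the channel")
```

A runtime-configurable radio of the kind described in the abstract would construct such flowgraphs programmatically, choosing blocks and parameters from the GUI or a JSON file rather than hard-coding them as above.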
Software-based radios implemented on general-purpose processors are cheaper, easier, and faster to develop, maintain, and upgrade than hardware-based equivalents. Unfortunately, today's general-purpose processors are not fast enough to process certain waveforms. The current alternative is to perform all the signal processing in application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). These devices operate at very high speeds but are dramatically more expensive and require lengthy development cycles. In a large class of radios, a single, often small, block is responsible for the bulk of the signal processing, so implementing all of the processing in a high-speed device such as an ASIC or FPGA is very wasteful. In this paper, we present a software-centric architecture that offloads only the most computationally expensive tasks to an FPGA. The resulting system combines the advantages of both platforms while minimizing the disadvantages of each. An open-source software radio platform is combined with a commercial off-the-shelf (COTS) FPGA development board to create a hardware-accelerated multichannel receiver. The FPGA is used efficiently by partitioning the device into multiple accelerator regions and taking advantage of runtime partial reconfiguration to reconfigure each region as needed during operation. Comparisons between a software-only receiver and the hardware-accelerated implementation are presented.
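To make the software/hardware split concrete, the sketch below shows one way an offload point could look inside a GNU Radio flowgraph: a Python sync block that forwards each buffer to a hypothetical accelerator handle (standing in for a driver bound to one partially reconfigured FPGA region) and falls back to software when no hardware is present. The accel object and its process() method are assumptions for illustration, not the paper's API.

```python
import numpy as np
from gnuradio import gr

class fpga_offload_block(gr.sync_block):
    """Illustrative GR block that hands its per-buffer work to an accelerator.

    `accel` is a hypothetical handle (e.g., a DMA/driver wrapper for one
    accelerator region); `accel.process(buf)` is assumed to return an
    equal-length array. A pure-software fallback keeps the flowgraph
    runnable without hardware.
    """
    def __init__(self, accel=None):
        gr.sync_block.__init__(self, name="fpga_offload_block",
                               in_sig=[np.complex64], out_sig=[np.complex64])
        self.accel = accel

    def work(self, input_items, output_items):
        buf = input_items[0]
        if self.accel is not None:
            output_items[0][:len(buf)] = self.accel.process(buf)  # hardware path
        else:
            output_items[0][:len(buf)] = buf                      # software passthrough
        return len(buf)
```

Under this pattern, swapping which computation a given accelerator region performs (via partial reconfiguration) would not change the flowgraph structure, only the behavior behind the handle.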
Next-generation communications satellite constellations will use advanced radio frequency (RF) technologies to provide Internet protocol (IP) packet-switched high-speed backbone transport services for various user communication applications, with ever-increasing traffic demand. These applications range from data services to imagery, voice, video, and other emerging applications. Satellite uplinks and downlinks may endure channel impairments with fades of varying durations due to weather, communications-on-the-move (COTM) blockages, scintillation, terrestrial multipath, or jamming. Satellite payloads and ground terminals must be able to mitigate this wide range of impairments and optimize the use of available spectrum to deliver the highest possible data rates while maintaining a required quality of service (QoS). A suite of mitigation techniques, including channel interleaving and forward error correction (FEC) in the physical layer, dynamic coding and modulation (DCM) and automatic repeat request (ARQ) in the data link layer, and application codec adaptation (ACA) in the application layer, has been proposed for various channel fades. Since each mitigation strategy can interact with the others, it is essential not only to assess the performance of each technique in isolation, but also to understand how multiple cross-layer techniques work together. This paper describes an emulation study of channel impairment mitigation using a combination of dynamic modulation (DM), ARQ, and ACA for various channel fades. A real-time emulation test bed was established by integrating the satellite-to-terminal real-time Ethernet configurable hardware (STRETCH) and satellite link emulator (SALEM) test beds, both of which are unique capabilities developed in-house at The Aerospace Corporation. STRETCH provides modulation/demodulation, coding, interleaving, and various types of channel fading. SALEM implements a range of mitigation techniques, such as ARQ, DM, and ACA. ARQ retransmissions are triggered in the absence of an acknowledgment, DM is invoked upon SNR changes, and ACA is called when the available data rate changes. Results show that DM and ACA successfully mitigate channel fades of longer durations. Faster fades with fluctuating channel gains but a steady average SNR over a given time window do not trigger DM, but suffer bit errors and packet drops caused by instantaneous low SNR values. ARQ retransmissions successfully mitigate these types of channel fades. This paper presents descriptions of the test bed architecture, mitigation techniques, test scenarios, and test results.
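The trigger conditions quoted above (ARQ on a missing acknowledgment, DM on SNR changes, ACA on available-rate changes) can be summarized in a few lines of control logic. The sketch below is only a schematic restatement of those rules with hypothetical SNR thresholds and a made-up modcod table; the actual STRETCH/SALEM decision logic and parameters are not given in this abstract.

```python
from dataclasses import dataclass

# Hypothetical modcod table: (minimum SNR in dB, modulation/coding label).
MODCOD_TABLE = [(4.0, "QPSK 1/2"), (8.0, "QPSK 3/4"), (12.0, "8PSK 2/3")]

@dataclass
class LinkState:
    snr_db: float
    modcod: str = "QPSK 1/2"
    app_rate_kbps: int = 500

def dynamic_modulation(state: LinkState) -> LinkState:
    """DM: pick the highest-rate modcod whose SNR threshold is currently met."""
    for threshold, modcod in reversed(MODCOD_TABLE):
        if state.snr_db >= threshold:
            state.modcod = modcod
            break
    return state

def application_codec_adaptation(state: LinkState, link_rate_kbps: int) -> LinkState:
    """ACA: cap the application codec rate to the currently available link rate."""
    state.app_rate_kbps = min(state.app_rate_kbps, link_rate_kbps)
    return state

def arq_should_retransmit(ack_received: bool) -> bool:
    """ARQ: retransmit whenever an acknowledgment is missing."""
    return not ack_received
```

The interaction effects studied in the paper arise precisely because these three decisions run at different layers and on different timescales: fast fades may toggle ARQ without ever moving the windowed SNR enough to change the modcod or codec rate.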
Phase noise is a complex and ubiquitous source of bit error rate (BER) degradation in all communication systems. Tight interaction between the phase noise and the implementation of receiver tracking loops limits the fidelity of analytical derivations. The long timescales associated with phase noise make software simulations extremely time consuming. Both of these difficulties are exacerbated in systems that employ forward error correction. This paper reports on the development of a hardware-assisted phase noise emulator. This emulator allows injection of wideband, completely programmable phase noise into a real-time system. The phase noise samples are computed entirely in software and then passed to a hardware engine. The hardware engine digitizes a low-IF signal, combines it with the computed phase noise, and regenerates an analog low-IF output. The system described in this paper supports signals up to 30 MHz wide, centered at a 70 MHz IF. The phase noise characteristics can be modeled accurately up to 10 MHz using a simple model and up to 1 MHz with a more complex model.
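A common way to compute such programmable phase-noise samples in software is to shape white Gaussian noise in the frequency domain so that it approximates a specified L(f) mask. The sketch below uses that textbook method with an illustrative two-point mask and an assumed 60 MS/s sample rate; it is an assumption about how such samples could be generated, not the emulator's documented algorithm.

```python
import numpy as np

def synth_phase_noise(n, fs, mask_freqs_hz, mask_dbc_hz, seed=0):
    """Synthesize n phase-noise samples (radians) approximating an L(f) mask.

    mask_freqs_hz / mask_dbc_hz define the single-sideband phase-noise mask,
    interpolated in log-frequency and extrapolated flat beyond its endpoints.
    White Gaussian noise is shaped in the frequency domain; the small-angle
    approximation L(f) ~ S_phi(f)/2 is assumed.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f = np.clip(freqs, mask_freqs_hz[0], None)          # avoid log10(0) at DC
    l_dbc = np.interp(np.log10(f), np.log10(mask_freqs_hz), mask_dbc_hz)
    s2 = 10.0 ** (l_dbc / 10.0)                         # two-sided phase PSD, rad^2/Hz
    s2[0] = 0.0                                         # no static phase offset
    spectrum = np.sqrt(s2 * fs * n / 2.0) * (rng.standard_normal(freqs.size)
                                             + 1j * rng.standard_normal(freqs.size))
    return np.fft.irfft(spectrum, n)

# example: -80 dBc/Hz at 1 kHz falling to -120 dBc/Hz at 1 MHz, 60 MS/s
phi = synth_phase_noise(2 ** 18, 60e6, [1e3, 1e6], [-80.0, -120.0])
print(phi.std(), "rad RMS")
```

In an architecture like the one described, a buffer of samples such as phi would be streamed to the hardware engine, which applies the corresponding phase rotation to the digitized low-IF signal in real time.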