In this work, we consider a 1-bit quantized Massive MIMO channel with superimposed pilots (SP), dubbed QSP. With a linear minimum mean square error (LMMSE) channel estimator and a maximum ratio combining (MRC) receiver at the base station (BS), we derive an approximate lower bound on the ergodic achievable rate. Optimizing over the pilot and data powers, we obtain the power allocation that maximizes the data rate in closed form. We demonstrate that quantization noise is not critical and that the data rate converges to a deterministic value as the number of BS antennas grows without limit. Compared with a 1-bit quantized Massive MIMO channel with time-multiplexed pilots (QTP), we show that QSP can outperform non-optimized QTP in many cases. Moreover, we demonstrate that the gap between QSP with and without pilot removal after channel estimation is not significant. Numerical results verify our analytical findings, and insights are provided for further research on SP.
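As a rough illustration of the QSP receive chain, the sketch below uses assumed values for the antenna count, block length, power split, and SNR, and a simple correlator in place of the LMMSE estimator. It superimposes a pilot on QPSK data, quantizes each I/Q rail to one bit, and applies MRC without pilot removal (the residual pilot term corresponds to the "without pilot removal" variant above):

```python
import numpy as np

rng = np.random.default_rng(0)
M, tau = 64, 128            # BS antennas, coherence block length (assumed)
rho_p, rho_d = 0.3, 0.7     # pilot/data power split (illustrative, not the derived optimum)
snr = 10.0                  # linear SNR (assumed)

h = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)   # Rayleigh channel
p = np.exp(2j * np.pi * rng.random(tau))                                   # unit-modulus pilot
s = (2 * rng.integers(0, 2, tau) - 1 + 1j * (2 * rng.integers(0, 2, tau) - 1)) / np.sqrt(2)  # QPSK

x = np.sqrt(rho_p) * p + np.sqrt(rho_d) * s                # superimposed pilot + data
n = (rng.standard_normal((M, tau)) + 1j * rng.standard_normal((M, tau))) / np.sqrt(2 * snr)
y = np.outer(h, x) + n
r = (np.sign(y.real) + 1j * np.sign(y.imag)) / np.sqrt(2)  # 1-bit ADC on each I/Q rail

h_hat = (r @ p.conj()) / (np.sqrt(rho_p) * tau)            # correlator estimate (LS stand-in for LMMSE)
s_hat = (h_hat.conj() @ r) / np.linalg.norm(h_hat) ** 2    # MRC combining, pilot not removed
print(f"I-rail symbol error proxy: {np.mean(np.sign(s_hat.real) != np.sign(s.real)):.3f}")
```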
Massive connectivity and low latency are two key challenges the Internet of Things (IoT) must meet to deliver the Quality of Service (QoS) required by the numerous devices it serves. In this paper, we propose a new method to lower the outage probability in millimeter wave (mmWave) NOMA systems with a massive multiple-input multiple-output (MIMO) structure. We obtain a closed-form expression for the outage probability and minimize it. In the proposed method, a random pairing scheme is applied, eliminating the need for channel state information (CSI), which reduces the system overhead while achieving the required QoS. The results demonstrate that the proposed cellular machine-to-machine (M2M) communication system with the mmWave massive-MIMO-NOMA transmission scheme improves the outage probability compared to previous works.
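A minimal Monte Carlo sketch of the outage computation for one randomly paired two-user downlink NOMA cluster is given below; the power split, target rate, and SNR are assumed, successive interference cancellation (SIC) is taken as perfect, and the mmWave massive-MIMO beamforming stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
trials, snr = 100_000, 10.0          # linear transmit SNR (assumed)
a_far, a_near = 0.8, 0.2             # power split: more power to the far (weak) user
R_target = 1.0                        # target rate in bits/s/Hz (assumed)

g = rng.exponential(1.0, (trials, 2))            # Rayleigh gains of two randomly paired users
g_near, g_far = g.max(axis=1), g.min(axis=1)     # order the random pair by channel gain

sinr_far = a_far * snr * g_far / (a_near * snr * g_far + 1)   # far user, near signal as interference
sinr_near = a_near * snr * g_near                              # near user after (assumed perfect) SIC
outage = np.mean((np.log2(1 + sinr_far) < R_target) |
                 (np.log2(1 + sinr_near) < R_target))
print(f"estimated outage probability: {outage:.4f}")
```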
Fast and secure network connectivity is essential for emerging Internet of Things (IoT) applications and services. Distributed blockchain technology offers compelling opportunities to address IoT security, privacy, and reliability issues. However, using blockchain in IoT networks introduces a new set of issues, including high computational overhead, latency, and bandwidth overhead, that can adversely affect the proper operation of the IoT. This paper proposes a new lightweight blockchain-based architecture for 5G-enabled IoT. The architecture combines small cells, network functions virtualization (NFV), software-defined networking (SDN), and distributed clouds with a lightweight blockchain consensus algorithm to guarantee high availability, real-time data delivery, security, resiliency, and low latency. We compare the performance of the proposed architecture against conventional 5G in six scenarios in terms of processing time and power consumption. The proposed structure adds data overhead, stronger hashing, and encryption protocols to conventional 5G to improve the security of IoT networks against data manipulation and fraud; nevertheless, it reduces the traffic passed through the Internet and the local transaction processing time.
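As a toy illustration of the lightweight-ledger idea (not the actual consensus algorithm of the architecture), the sketch below hash-chains compact JSON block headers with SHA-256 and no proof-of-work:

```python
import hashlib
import json
import time

# A toy "lightweight" block: SHA-256 over a compact JSON header, no proof-of-work,
# mirroring the idea of replacing heavy consensus with a cheap hash-chained ledger.
def make_block(prev_hash: str, transactions: list) -> dict:
    header = {
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "tx_root": hashlib.sha256(json.dumps(transactions).encode()).hexdigest(),
    }
    header["hash"] = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "transactions": transactions}

genesis = make_block("0" * 64, [])
block1 = make_block(genesis["header"]["hash"], [{"device": "sensor-17", "reading": 21.5}])
print(block1["header"]["hash"])
```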
Generalized Frequency Division Multiplexing (GFDM) is a promising multi-carrier modulation scheme for next-generation wireless communication systems. GFDM uses a non-orthogonal pulse shape, which leads to self-interference among sub-carriers. In this paper, we propose a new coded GFDM method that reduces the self-interference in each block by using a space-frequency block coding (SFBC) scheme. To this end, we introduce corrective coefficients based on channel state information and the demodulation matrix for each modulated data symbol. We evaluate the bit error rate (BER) performance of SFBC-GFDM transmission over Rayleigh fading channels with a matched filter (MF) receiver. Our computer simulations show that the proposed scheme substantially improves the BER performance of the system, achieving a 5 dB gain at a BER of $10^{-2}$ over previous counterparts.
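For context, a minimal sketch of the textbook GFDM modulation matrix and MF receiver is shown below, with an assumed Gaussian prototype pulse and illustrative block dimensions, and without the proposed SFBC stage; the off-diagonal energy of $\mathbf{A}^H \mathbf{A}$ is precisely the self-interference in question:

```python
import numpy as np

K, M = 8, 5                      # subcarriers, subsymbols per block (illustrative)
N = K * M
n = np.arange(N)
g = np.exp(-0.5 * ((n - N / 2) / (0.3 * N)) ** 2)   # Gaussian prototype pulse (assumed)
g /= np.linalg.norm(g)

A = np.empty((N, N), dtype=complex)                  # GFDM modulation matrix
for m in range(M):
    for k in range(K):
        A[:, m * K + k] = np.roll(g, m * K) * np.exp(2j * np.pi * k * n / K)

d = (2 * np.random.default_rng(2).integers(0, 2, N) - 1).astype(complex)  # BPSK symbols
x = A @ d                        # GFDM modulation
d_mf = A.conj().T @ x            # matched-filter receiver

# Off-diagonal energy of A^H A is the self-interference the SFBC coding must combat.
G = A.conj().T @ A
print("self-interference power:", np.linalg.norm(G - np.diag(np.diag(G))) ** 2)
```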
In this paper, we propose and study distributed blind adaptive algorithms for wireless sensor network applications. Specifically, we derive distributed forms of the blind least mean squares (LMS) and recursive least squares (RLS) algorithms based on the constant modulus (CM) criterion. We assume that inter-sensor communication is single-hop over a Hamiltonian cycle to save power and communication resources. The distributed blind adaptive algorithms run in the network with the collaboration of nodes in time and space to estimate the parameters of an unknown system or a physical phenomenon. Simulation results demonstrate the effectiveness of the proposed algorithms and show their superior performance over the corresponding non-cooperative adaptive algorithms.
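A minimal sketch of the incremental idea, assuming a short FIR channel, a QPSK source, and illustrative step size and block sizes, is the CM-LMS variant below, where the weight vector circulates once around the Hamiltonian cycle per iteration and each node refines it on its own data:

```python
import numpy as np

rng = np.random.default_rng(3)
nodes, taps, cycles, mu, R2 = 8, 5, 400, 2e-3, 1.0
c = np.array([1.0, 0.4, -0.2])                      # unknown channel (assumed)

def node_samples(n_samp):
    """Local regressors at a node: windows of the channel output driven by QPSK."""
    s = np.exp(2j * np.pi * rng.integers(0, 4, n_samp + 10) / 4)   # constant-modulus source
    u = np.convolve(s, c)[: n_samp + taps]
    return np.array([u[i : i + taps][::-1] for i in range(n_samp)])

w = np.zeros(taps, complex); w[0] = 1.0             # spike initialization
for _ in range(cycles):
    for k in range(nodes):                          # pass w along the Hamiltonian cycle
        for u in node_samples(20):                  # each node's local data block
            y = np.conj(w) @ u
            w -= mu * (np.abs(y) ** 2 - R2) * u * np.conj(y)   # CM stochastic-gradient step
print("equalizer output modulus spread:",
      np.std([np.abs(np.conj(w) @ u) for u in node_samples(500)]))
```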
In this paper, we propose an optimal random beamforming technique for mmWave MIMO-NOMA transmission systems. In our model, we consider a NOMA system with a user pairing scheme based on the distance between the cellular base station (BS) and the active mobile users, which reduces the overall system overhead for massive connectivity and achieves the desired Quality of Service (QoS). We formulate an optimization problem that finds the optimal random beamforming coefficients by minimizing the outage probability. We analyze the convexity of the outage probability for three user pairing schemes and prove that it is convex in the high signal-to-noise ratio (SNR) regime. We then use a gradient-based method, namely gradient descent, to find the optimal random beamforming coefficients. Our simulation results demonstrate that the proposed transmission schemes reduce the outage probability and improve network performance compared to previous works.
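A generic projected-gradient-descent skeleton of the kind used here is sketched below; the objective is a stand-in convex surrogate with hypothetical per-beam weights, not the actual outage expression:

```python
import numpy as np

def projected_gd(grad, x0, lo=1e-3, hi=10.0, step=0.1, iters=500):
    """Gradient descent with a box projection keeping coefficients feasible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lo, hi)   # gradient step + projection
    return x

w = np.array([1.0, 2.0, 4.0])                     # hypothetical per-beam weights
grad = lambda a: -w / a**2 + 1.0                  # gradient of sum(w/a + a), minimized at a = sqrt(w)
print(projected_gd(grad, np.ones(3)))             # -> approximately [1.0, 1.414, 2.0]
```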
In this paper, we investigate mmWave large-scale (LS) multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) systems with a sparse transmission scheme. For this system, we consider a NOMA technique with a user pairing scheme and a random beamforming technique to reduce the system overhead and achieve the desired communication-link reliability. We formulate an optimization problem to find the optimal random beamforming coefficients that minimize the outage probability of the system. We show that the problem is convex and, therefore, utilize a gradient-based algorithm to find the optimal random beamforming coefficients. To optimize the transmission power, we use the subset of antenna elements in the array with the strongest beamforming coefficients obtained from the proposed algorithm. Our numerical experiments show that the proposed transmission scheme substantially improves the system performance in terms of outage probability, sum rate, and energy efficiency.
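The antenna-subset step can be sketched as follows, with assumed array sizes and random stand-in coefficients in place of the optimized ones:

```python
import numpy as np

rng = np.random.default_rng(4)
M, M_active = 64, 16                                        # array size and retained elements (assumed)
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)    # stand-in optimized coefficients
w /= np.linalg.norm(w)

keep = np.argsort(np.abs(w))[-M_active:]                    # indices of the strongest coefficients
w_sparse = np.zeros_like(w)
w_sparse[keep] = w[keep]
w_sparse /= np.linalg.norm(w_sparse)                        # restore unit transmit power

print(f"retained {M_active}/{M} antennas, coefficient energy kept "
      f"before renormalizing: {np.linalg.norm(w[keep])**2:.2f}")
```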
The space-time coded massive MIMO (STCM-MIMO) system provides superior bit error rate (BER) performance compared with conventional space-time coding and massive MIMO techniques. The transmitter of the STCM-MIMO system consists of a large antenna array. In a practical system, the self-interference created by the signals transmitted from the elements of this array, known as mutual coupling (MC), degrades performance, and the effect is pronounced for large arrays. On the other hand, increasing the number of transmit antennas improves the BER. Hence, there is a trade-off in selecting the optimum number of transmit antennas in an STCM-MIMO system. To take the impact of MC into account, we derive an analytical expression for the received signal that accurately models the STCM-MIMO system in the presence of MC. We present an algorithm that selects the number of antennas minimizing the system BER under mutual coupling. Through computer simulations, we investigate the BER performance of the STCM-MIMO system for different numbers of array elements.
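The flavor of the antenna-count trade-off can be sketched with the toy sweep below; the coupling-loss model and all constants are assumptions for illustration, not the derived expressions of this work:

```python
import math

# Toy sweep: array gain grows with n, but for a fixed aperture the spacing shrinks
# and mutual coupling eats into efficiency, so a BER proxy has an interior minimum.
snr, aperture, coupling = 2.0, 4.0, 0.2   # linear SNR; aperture/coupling constants in wavelengths (assumed)

def ber_proxy(n):
    spacing = aperture / (n - 1)
    efficiency = math.exp(-coupling / spacing)          # assumed coupling-loss model
    eff_snr = snr * n * efficiency                      # array gain discounted by coupling loss
    return 0.5 * math.erfc(math.sqrt(eff_snr / 2))      # BPSK-style Q-function proxy

best_n = min(range(2, 65), key=ber_proxy)
print("antenna count minimizing the BER proxy:", best_n)
```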
We study the problem of distributed adaptive estimation over networks where nodes cooperate to estimate physical parameters that can vary over both space and time domains. We use a set of basis functions to characterize the space-varying nature of the parameters and propose a diffusion least mean-squares (LMS) strategy to recover these parameters from successive time measurements. We analyze the stability and convergence of the proposed algorithm, and derive closed-form expressions to predict its learning behavior and steady-state performance in terms of mean-square error. We find that in the estimation of the space-varying parameters using distributed approaches, the covariance matrix of the regression data at each node becomes rank-deficient. Our analysis reveals that the proposed algorithm can overcome this difficulty to a large extent by benefiting from the network stochastic matrices that are used to combine exchanged information between nodes. We provide computer experiments to illustrate and support the theoretical findings.
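A minimal sketch of the adapt-then-combine recursion, assuming a polynomial basis over node positions on a line, a ring topology with uniform combiners, and an illustrative step size, is given below; note that each node's effective regression covariance is rank-one, which is exactly the rank deficiency discussed above, and the combination step is what restores identifiability:

```python
import numpy as np

rng = np.random.default_rng(5)
K, Q, iters, mu = 10, 3, 4000, 0.02
pos = np.linspace(-1, 1, K)                      # node locations (assumed)
B = np.vander(pos, Q, increasing=True)           # basis matrix: theta(p) = B @ w
w_true = np.array([1.0, -0.5, 0.3])              # hypothetical expansion coefficients

A = np.zeros((K, K))                             # ring topology with uniform combiners
for k in range(K):
    A[k, [k, (k - 1) % K, (k + 1) % K]] = 1 / 3

W = np.zeros((K, Q))                             # row k: node k's estimate of w
for _ in range(iters):
    psi = np.empty_like(W)
    for k in range(K):
        x = rng.standard_normal()                # scalar regressor
        d = x * (B[k] @ w_true) + 0.05 * rng.standard_normal()
        r = x * B[k]                             # effective (rank-one) regressor for w
        psi[k] = W[k] + mu * (d - r @ W[k]) * r  # adapt
    W = A @ psi                                  # combine over neighbors
print("network-averaged estimate:", W.mean(axis=0))   # -> approximately w_true
```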
In this paper, we propose an unequal-spacing technique that increases the resolution of direction-of-arrival (DOA) estimation algorithms. The linear prediction (LP) algorithm is tested with the proposed antenna arrangement, and a dramatic improvement is obtained. In addition, we investigate the effect of the reference element position on the resolution of the LP algorithm and find that resolution improves when the reference element is chosen as either the first or the last element of the linear array. We conclude that an M-element antenna array with unequal spacing achieves almost the same resolution as a 2M-element array with equal spacing.
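A minimal sketch of the LP pseudospectrum for a nonuniform array is given below, with illustrative element positions, source angles, and SNR, the reference taken at the first (end) element, and the spectrum written up to a constant scale factor:

```python
import numpy as np

rng = np.random.default_rng(6)
pos = np.array([0.0, 0.5, 1.5, 3.0, 4.5])        # element positions in wavelengths (unequal spacing)
angles_true = np.deg2rad([-10.0, 8.0])            # two sources (assumed)
snr, snaps = 100.0, 400                           # linear SNR and snapshot count (assumed)

a = lambda th: np.exp(2j * np.pi * pos[:, None] * np.sin(th))   # steering vectors
S = (rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))) / np.sqrt(2)
X = a(angles_true) @ S + (rng.standard_normal((5, snaps)) +
                          1j * rng.standard_normal((5, snaps))) / np.sqrt(2 * snr)
R = X @ X.conj().T / snaps                        # sample covariance

u = np.zeros(5); u[0] = 1.0                       # reference element: first (end) element
Rinv = np.linalg.inv(R)
grid = np.deg2rad(np.linspace(-90, 90, 1441))
P = 1.0 / np.abs(u @ Rinv @ a(grid)) ** 2         # LP pseudospectrum (up to a constant)
print("strongest peak (deg):", np.rad2deg(grid[np.argmax(P)]))
```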