Accurate perception of the movement and appearance of vehicles depends on the robustness and reliability of extrinsic parameter calibration in multi-sensor fusion scenarios. However, conventional calibration methods require the manual acquisition of prior information, leading to high labor costs and low calibration accuracy. We therefore propose an automatic coarse-to-fine calibration method for roadside radar and camera sensors that lowers costs and improves accuracy. An association strategy based on fluctuating traffic volumes is also developed to support robust target matching during the coarse-to-fine calibration process. Finally, the extrinsic parameters between the radar coordinate system and the camera coordinate system are calibrated through double rotations of the position vectors obtained from each system. To verify the proposed method, an experiment was conducted on a pedestrian bridge using an uncalibrated 4D millimeter-wave radar and a monocular traffic camera. The results show that the proposed method reduces the interquartile range of the roll angle by 41.5% compared to a state-of-the-art neural network method, and outperforms manual calibration by 2.47% in terms of the average reprojection error.
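For intuition on the final extrinsic-estimation step, the sketch below aligns matched radar and camera target positions using the classic SVD-based (Kabsch) rigid alignment; this is a stand-in for the paper's double-rotation procedure, and the function name and toy data are illustrative.

```python
import numpy as np

def estimate_extrinsics(radar_pts, cam_pts):
    """Estimate rotation R and translation t mapping radar points to camera points.

    radar_pts, cam_pts: (N, 3) arrays of matched target positions.
    Classic SVD-based rigid alignment (Kabsch), used here as a stand-in for
    the paper's double-rotation procedure.
    """
    radar_pts = np.asarray(radar_pts, dtype=float)
    cam_pts = np.asarray(cam_pts, dtype=float)
    mu_r, mu_c = radar_pts.mean(axis=0), cam_pts.mean(axis=0)
    H = (radar_pts - mu_r).T @ (cam_pts - mu_c)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_r
    return R, t

# toy check: recover a known rotation about the z-axis plus a translation
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
pts = np.random.rand(20, 3)
R_est, t_est = estimate_extrinsics(pts, pts @ R_true.T + np.array([1.0, 2.0, 0.5]))
print(np.allclose(R_est, R_true))   # True
```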
Time-Sensitive Networking (TSN) is appealing to the Industrial Internet of Things (IIoT) due to its support for deterministic real-time communication over Ethernet. The deterministic transmission of time-sensitive traffic is achieved through a time-triggered mechanism, which requires precise schedule synthesis. However, existing approaches mostly focus on static schedules and cannot guarantee rapid responses to dynamic requirements due to their long runtime. Moreover, the multicast scheduling problem required by distributed applications in the IIoT has largely been ignored. In this paper, we formulate the joint routing and scheduling problem for multicast time-sensitive traffic as an Integer Linear Program (ILP) and extend it to a cluster-ILP (CILP) to accelerate schedule synthesis. First, an improved topology pruning step is introduced to reduce the scale of the scheduling problem. Then, flows are divided into several groups and schedules are computed incrementally, group by group. The evaluation results indicate that the proposed algorithm reduces the runtime by 86.7% on average while preserving resource utilization, thus increasing schedulability for dynamic applications in TSN.
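As a rough illustration of the incremental, group-by-group idea behind CILP, the sketch below greedily assigns transmission offsets on a single link, keeping the offsets of earlier groups fixed while the next group is placed; it omits routing, multicast trees, and the ILP formulation, and all names are illustrative.

```python
from math import gcd

def schedule_incrementally(groups):
    """Greedy, single-link sketch of incremental schedule synthesis.

    groups: list of flow groups; each flow is (flow_id, period_in_slots).
    Flows scheduled in earlier groups keep their offsets while the next
    group is placed, mimicking the group-by-group idea of CILP.
    Returns {flow_id: transmission offset in slots}.
    """
    placed = []                     # (offset, period) of already scheduled flows
    offsets = {}
    for group in groups:
        for flow_id, period in group:
            for offset in range(period):
                # hyperperiod of this flow together with everything placed so far
                hyper = period
                for _, p in placed:
                    hyper = hyper * p // gcd(hyper, p)
                busy = {(o + k * p) % hyper
                        for o, p in placed for k in range(hyper // p)}
                mine = {(offset + k * period) % hyper
                        for k in range(hyper // period)}
                if not busy & mine:             # slot pattern is collision-free
                    placed.append((offset, period))
                    offsets[flow_id] = offset
                    break
            else:
                raise RuntimeError(f"flow {flow_id} cannot be placed on this link")
    return offsets

# two flows of period 4 in the first group, one flow of period 8 added later
print(schedule_incrementally([[("f1", 4), ("f2", 4)], [("f3", 8)]]))
# {'f1': 0, 'f2': 1, 'f3': 2}
```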
The past few years have witnessed the great potential of exploiting WiFi signals for positioning. Prior work focuses on discovering the absolute location of a radio source and has achieved promising accuracies of tens of centimeters. However, many applications, such as aerial gesture or handwriting tracking, are more concerned with the detailed motion shape of the target than with its exact location, which requires severalfold higher accuracy. To this end, we present MagicInput, a virtual handwriting interface built by tracking the motion traces of a WiFi source. Based on channel state information (CSI), MagicInput devises an incremental motion-based tracking model that correlates the motion traces with the angle and length variations of the propagation paths. The model shifts the tracking task from the transceiver view to an antenna-array-oriented view and eliminates the need for prior knowledge of anchor locations. MagicInput further provides an end-to-end pipeline for tracking refinement, including interference suppression, motion segmentation, and grasp-pressure-sensor-based motion instance detection. We prototype MagicInput using off-the-shelf WiFi radios, and extensive experiments show that it achieves an accuracy of 8.5 mm across diverse users and environmental conditions. With ubiquitous WiFi signals, MagicInput can turn any region into an interactive handwriting interface with millimeter accuracy.
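A minimal sketch of the incremental-tracking idea: assuming matched per-antenna changes in propagation-path length (derived from CSI phase in practice), a small displacement of the source can be recovered by least squares. This length-only, single-path simplification is not the full MagicInput model, which also exploits angle variations; all names and toy values are illustrative.

```python
import numpy as np

def track_increment(antenna_pos, source_pos, path_len_deltas):
    """Estimate a small 2D displacement of the WiFi source from per-antenna
    changes in propagation-path length.

    For a small move dp, the path length to antenna i changes by roughly
    u_i . dp, where u_i is the unit vector from antenna i to the source,
    so dp is recovered by least squares. antenna_pos is (M, 2) and
    path_len_deltas is (M,).
    """
    antenna_pos = np.asarray(antenna_pos, float)
    source_pos = np.asarray(source_pos, float)
    dirs = source_pos - antenna_pos
    U = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)   # unit vectors u_i
    dp, *_ = np.linalg.lstsq(U, np.asarray(path_len_deltas, float), rcond=None)
    return source_pos + dp, dp

# toy run: three receive antennas, source nudged by (5 mm, -3 mm)
ants = np.array([[0.0, 0.0], [0.3, 0.0], [0.0, 0.3]])
p0, dp_true = np.array([1.0, 1.2]), np.array([0.005, -0.003])
deltas = [np.linalg.norm(p0 + dp_true - a) - np.linalg.norm(p0 - a) for a in ants]
print(track_increment(ants, p0, deltas)[1])   # ~ [0.005, -0.003]
```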
Due to the limited battery resources of mobile clients, energy-efficient uplink communication is becoming increasingly important. In this paper, we propose a convex pricing-based resource allocation algorithm to improve the energy efficiency of two-tier co-channel femtocell networks. We model the joint subchannel allocation and power control problem as a non-cooperative game in which both the transmit power and the circuit power are considered. To reduce complexity while maintaining good performance, we decompose the problem into two sub-problems and propose a suboptimal subchannel allocation algorithm and an optimal power control algorithm to solve the resulting non-cooperative games. Numerical results show that the proposed resource allocation schemes effectively improve energy efficiency while keeping the complexity low.
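A hedged sketch of the pricing-based non-cooperative game: each user iteratively plays a best response to the others' powers under an energy-efficiency-style utility with a convex quadratic price term. The utility form, the price term, and all parameters are illustrative choices, not the paper's exact model.

```python
import numpy as np

def best_response_powers(G, noise=1e-3, p_circuit=0.1, price=0.5,
                         p_grid=np.linspace(1e-3, 1.0, 200), iters=50):
    """Iterative best-response sketch for a pricing-based power control game.

    G[i, j]: channel gain from transmitter j to receiver i. Each user i
    maximizes an energy-efficiency style utility
        log2(1 + SINR_i) / (p_i + p_circuit) - price * p_i**2
    by grid search, holding the other users' powers fixed.
    """
    n = G.shape[0]
    p = np.full(n, p_grid[0])
    for _ in range(iters):
        for i in range(n):
            interf = noise + sum(G[i, j] * p[j] for j in range(n) if j != i)
            sinr = G[i, i] * p_grid / interf
            util = np.log2(1.0 + sinr) / (p_grid + p_circuit) - price * p_grid ** 2
            p[i] = p_grid[np.argmax(util)]       # best response of user i
    return p

G = np.array([[1.0, 0.1], [0.2, 0.8]])
print(best_response_powers(G))   # converged transmit powers of the two users
```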
Virtualization technology is considered an effective way to enhance resource utilization and interference management via radio resource abstraction in heterogeneous networks (HetNets). The critical challenge in wireless virtualization is virtual resource allocation, on which substantial work has been done. However, most existing research on virtual resource allocation focuses on improving total utility. Unlike existing works, we investigate dynamics-aware virtual radio resource allocation in virtualization-based HetNets, considering both utility and fairness. A virtual radio resource management framework is proposed in which the radio resources of different physical networks are virtualized into a resource pool, and mobile virtual network operators (MVNOs) compete for virtual resources from the pool to serve their users. A virtual radio resource allocation algorithm based on a biological model is developed that accounts for system utility, fairness, and dynamics. Simulation results verify that the proposed algorithm not only converges within a few iterations, but also achieves a better trade-off between total utility and fairness than an existing algorithm. In addition, it can be used to analyze the population dynamics of the system.
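The abstract does not fix the biological model, so the sketch below assumes replicator dynamics, a common choice: each MVNO's share of the virtual resource pool grows when its utility exceeds the population average, which trades off total utility against fairness. The fitness function and parameters are illustrative.

```python
import numpy as np

def replicator_allocation(fitness_fn, n_mvnos, steps=200, lr=0.1):
    """Replicator-dynamics style sketch for sharing a virtual resource pool.

    x[i] is MVNO i's share of the pool (sums to 1). Shares grow when an
    MVNO's utility ("fitness") exceeds the population average. Replicator
    dynamics is one common biological model; the paper's dynamics may differ.
    """
    x = np.full(n_mvnos, 1.0 / n_mvnos)
    for _ in range(steps):
        f = fitness_fn(x)                       # per-MVNO utility at shares x
        x = x + lr * x * (f - np.dot(x, f))     # replicator update
        x = np.clip(x, 1e-9, None)
        x /= x.sum()                            # keep shares on the simplex
    return x

# toy fitness: diminishing returns with different demand weights per MVNO
weights = np.array([1.0, 2.0, 3.0])
shares = replicator_allocation(lambda x: weights * np.log1p(10 * x) / (10 * x), 3)
print(shares)   # larger-demand MVNOs end up with larger shares
```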
Owing to their excellent flexibility, unmanned aerial vehicles (UAVs) have been widely adopted as aerial access points to provide data collection services for Internet of Things (IoT) devices. Moreover, millimeter-wave (MmWave)-aided UAV communication, which can achieve extremely high data rates, has become a hot topic in recent research. Codebook design and beam training are enabling technologies of MmWave communications. However, existing solutions designed for conventional communication systems cannot be applied directly to UAV MmWave communications because of the increased complexity of moving, three-dimensional UAV scenarios. Therefore, this paper focuses on joint codebook design and beam training. First, a 3D codebook is designed that provides flexible access for IoT devices and achieves the optimal system throughput. Then, based on the designed codebook, an Angle-Forecast-based Fast Beam Alignment (AFFBA) mechanism is proposed. The mechanism infers the potential angle range of the ideal angle of departure (AoD) from the beam adopted by the currently serving UAV. Combining this angle range with the UAV's trajectory model, the optimal beam is forecast. The proposed joint 3D codebook design and beam training significantly reduce the dimension of the beam sweeping space. Simulation results demonstrate the superior performance of the proposed mechanism, showing that it significantly reduces the beam sweeping space and effectively improves the normalized spectral efficiency (NSE) compared to the existing exhaustive training mechanism.
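To make the beam-training idea concrete, the sketch below builds a standard DFT codebook for a uniform linear array and sweeps only the codewords whose pointing angle falls inside a forecast AoD range, falling back to an exhaustive sweep otherwise. This is a 2D/ULA simplification of the paper's 3D codebook and AFFBA mechanism; the channel model and names are illustrative.

```python
import numpy as np

def dft_codebook(n_ant):
    """Uniform-linear-array DFT codebook: one steering codeword per direction."""
    angles = np.arcsin(np.linspace(-1, 1, n_ant, endpoint=False))   # beam directions
    k = np.arange(n_ant)
    W = np.exp(1j * np.pi * np.outer(k, np.sin(angles))) / np.sqrt(n_ant)
    return W, angles

def forecast_aligned_beam(h, W, angles, aod_lo, aod_hi):
    """Sweep only codewords whose pointing angle lies in the forecast AoD
    range [aod_lo, aod_hi] (radians), instead of the full exhaustive sweep.
    """
    cand = np.where((angles >= aod_lo) & (angles <= aod_hi))[0]
    if cand.size == 0:
        cand = np.arange(W.shape[1])            # fall back to exhaustive sweep
    gains = np.abs(W[:, cand].conj().T @ h)     # received beamforming gains
    best = cand[np.argmax(gains)]
    return best, gains.max()

n_ant = 16
W, angles = dft_codebook(n_ant)
true_aod = np.deg2rad(20)
h = np.exp(1j * np.pi * np.arange(n_ant) * np.sin(true_aod))   # single-path channel
idx, gain = forecast_aligned_beam(h, W, angles, np.deg2rad(10), np.deg2rad(30))
print(np.rad2deg(angles[idx]), gain)   # selected beam angle close to 20 degrees
```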
Mobile edge caching is regarded as a promising way to reduce the backhaul load of base stations (BSs). However, the cache capacity of BSs tends to be small, while mobile users' content preferences are diverse. Furthermore, both users' locations and the user-BS association are uncertain in wireless networks. All of these factors pose great challenges to content caching and content delivery. This paper studies the joint optimization of content placement and content delivery in a cache-enabled ultra-dense small-cell network (UDN) with a constrained backhaul link. Considering their different decision time scales, content placement and content delivery are investigated separately, but their interplay is taken into account. First, a content placement problem is formulated in which the uncertainty of the user-BS association is considered. In particular, unlike existing works, a multi-location request pattern is considered, in which users tend to send content requests from more than one, but a limited number of, locations during a day. Second, a user-BS association and wireless resource allocation problem is formulated with the objective of maximizing users' data rates under the backhaul bandwidth constraint. Because both problems are non-convex, problem transformation and variable relaxation are adopted to convert them into more tractable forms. Then, based on convex optimization methods, a content placement algorithm and a cache-aware user association and resource allocation algorithm are proposed. Finally, simulation results validate that the proposed algorithms offer clear performance advantages in terms of network utility, cache hit ratio, and quality-of-service guarantees, and are suitable for cache-enabled UDNs with constrained backhaul links.
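A minimal sketch of content placement under the multi-location request pattern: each BS caches the files with the highest expected demand, where demand weights user preferences by the probability that the user requests from that BS. The greedy per-BS rule stands in for the paper's relaxed convex formulation, and the toy numbers are illustrative.

```python
import numpy as np

def place_contents(assoc_prob, request_prob, cache_size):
    """Greedy sketch of content placement under uncertain user-BS association.

    assoc_prob[u, b]: probability that user u requests from BS b (capturing
    the multi-location request pattern). request_prob[u, f]: user u's
    preference for file f. Each BS caches the cache_size files with the
    highest expected demand.
    """
    expected_demand = assoc_prob.T @ request_prob        # (BS, file) demand
    placement = {}
    for b, demand in enumerate(expected_demand):
        placement[b] = list(np.argsort(demand)[::-1][:cache_size])
    return placement

# 3 users, 2 BSs, 5 files; each BS caches 2 files
assoc = np.array([[0.7, 0.3], [0.2, 0.8], [0.5, 0.5]])
prefs = np.array([[0.4, 0.3, 0.1, 0.1, 0.1],
                  [0.1, 0.1, 0.5, 0.2, 0.1],
                  [0.2, 0.2, 0.2, 0.2, 0.2]])
print(place_contents(assoc, prefs, cache_size=2))   # {BS: [cached file ids]}
```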
The interaction between intrusion detection and firewalls is a hot topic in network security research, yet almost all implemented systems are limited to IPv4 networks. This paper presents an IPv6-based distributed network security prevention system that combines a firewall with an intrusion detection system. By deploying intrusion detection agents and a centralized control server, the system obtains intrusion messages, analyzes them, and proactively adjusts the firewall's rules and policies, realizing the interaction between intrusion detection and the firewall over IPv6/IPv4. Test results show that the system is effective and dependable.
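A minimal sketch of the agent-to-controller interaction, assuming a JSON alert format and an ip6tables-based firewall; the field names and rule template are illustrative, not the original system's interface.

```python
import json
import subprocess

def handle_alert(alert_json, apply_rule=False):
    """Sketch of the control-server side: turn an IDS agent's alert into an
    IPv6 firewall rule. Alert fields and the DROP policy are illustrative.
    """
    alert = json.loads(alert_json)
    src = alert["src_ipv6"]                  # attacker address reported by the agent
    rule = ["ip6tables", "-I", "INPUT", "-s", src, "-j", "DROP"]
    if apply_rule:
        subprocess.run(rule, check=True)     # requires root and ip6tables installed
    return " ".join(rule)

print(handle_alert('{"src_ipv6": "2001:db8::bad", "signature": "port-scan"}'))
# ip6tables -I INPUT -s 2001:db8::bad -j DROP
```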
The radio resource allocation problem is studied, aiming to jointly optimize the energy efficiency (EE) and spectral efficiency (SE) of downlink OFDMA multi-cell networks. Different from existing works on either EE or SE optimization, a novel EE-SE tradeoff (EST) metric, which can capture both the EST relation and the individual cells' preferences for EE or SE performance, is introduced as the utility function for each base station (BS). Then the joint EE-SE optimization problem is formulated, and an iterative subchannel allocation and power allocation algorithm is proposed. Numerical results show that the proposed algorithm can exploit the EST relation flexibly and optimize the EE and SE simultaneously to meet diverse EE and SE preferences of individual cells.
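As one possible instantiation of an EST-style utility, the sketch below combines normalized EE and SE with a preference weight and sweeps the transmit power of a single cell; the functional form and all parameters are illustrative and may differ from the paper's EST metric.

```python
import numpy as np

def est_utility(p_tx, bandwidth=1e7, gain=1e-7, noise=1e-13,
                p_circuit=1.0, alpha=0.5):
    """Illustrative EE-SE tradeoff utility for a single cell over a power sweep.

    SE = log2(1 + g*p / N) in bit/s/Hz, EE = B*SE / (p + p_circuit) in bit/J.
    The utility is a normalized weighted sum with preference weight alpha
    (alpha near 1 favors EE, near 0 favors SE); p_tx is an array of powers.
    """
    se = np.log2(1.0 + gain * p_tx / noise)
    ee = bandwidth * se / (p_tx + p_circuit)
    return alpha * ee / np.max(ee) + (1 - alpha) * se / np.max(se)

# sweep transmit power and pick the EST-optimal point for an EE-leaning cell
p = np.linspace(0.01, 20.0, 2000)
u = est_utility(p, alpha=0.8)
print(f"EST-optimal transmit power: {p[np.argmax(u)]:.2f} W")
```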
The widespread deployment of renewable energy sources such as wind and solar is pushing power grids toward comprehensive data and predictive analysis. A large amount of research has been conducted, especially on machine learning methods, to achieve load forecasting. However, premature convergence and redundant iterations are two major defects of existing machine learning-based load forecasting methods, resulting in poor prediction performance and high time consumption. In this paper, a novel combined intelligent optimization algorithm based on the genetic algorithm (GA), the artificial fish swarm algorithm (AFSA), and particle swarm optimization (PSO) is proposed for optimizing machine learning-based load forecasting models. By replacing GA's mutation process with AFSA and PSO operators, the proposed algorithm, named the GA-AFSA-PSO Algorithm (GAPA), enhances both global and local search ability, leading to high prediction accuracy and fast convergence. To validate its effectiveness, GAPA is applied to optimizing support vector machines (SVM) and artificial neural networks (ANN) for one-day-ahead load forecasting. Moreover, two sets of comparative tests are carried out to confirm the advantages of GAPA. The simulation results show that, compared with GA, AFSA, PSO, AFSA-GA, and GA-PSO, GAPA improves prediction accuracy, convergence rate, and global search ability.
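A simplified sketch in the spirit of GAPA: GA selection and crossover, with the mutation step replaced by a PSO velocity update and an AFSA-style move toward a better neighbor within visual range, here minimizing a toy sphere function in place of a forecasting model's validation error. Operator details and parameters are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def gapa_minimize(f, dim=5, pop=30, iters=200, seed=0,
                  w=0.7, c1=1.5, c2=1.5, visual=1.0):
    """Hybrid GA + PSO + AFSA sketch: minimize f over real vectors."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    V = np.zeros((pop, dim))
    fit = np.array([f(x) for x in X])
    pbest, pbest_fit = X.copy(), fit.copy()
    gbest = X[np.argmin(fit)].copy()
    for _ in range(iters):
        # GA step: pairwise tournament selection + arithmetic crossover
        a, b = rng.integers(pop, size=(2, pop))
        parents = np.where((fit[a] < fit[b])[:, None], X[a], X[b])
        mates = parents[rng.permutation(pop)]
        alpha = rng.random((pop, 1))
        X = alpha * parents + (1 - alpha) * mates
        # PSO operator (replaces GA mutation): pull toward personal/global bests
        r1, r2 = rng.random((2, pop, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = X + V
        # AFSA-style follow: step toward a better individual within visual range
        for i in range(pop):
            j = rng.integers(pop)
            if f(X[j]) < f(X[i]) and np.linalg.norm(X[j] - X[i]) < visual:
                X[i] += 0.3 * (X[j] - X[i])
        # update personal and global bests
        fit = np.array([f(x) for x in X])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = X[better], fit[better]
        gbest = pbest[np.argmin(pbest_fit)].copy()
    return gbest, pbest_fit.min()

# toy objective: sphere function (stands in for a forecasting model's error)
best_x, best_val = gapa_minimize(lambda x: float(np.sum(x ** 2)), dim=5)
print(best_val)   # close to 0
```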