One-shot object detection (OSOD) aims to detect all object instances of the category specified by a single query image. Most existing OSOD studies strive to establish effective cross-image correlation from limited query information, while ignoring the model's bias towards the base classes and the resulting generalization degradation on the novel classes. Motivated by this observation, we propose a novel algorithm, the Base-class Suppression with Prior Guidance (BSPG) network, to achieve bias-free OSOD. Specifically, objects of the base categories are detected by a base-class predictor and eliminated by a base-class suppression (BcS) module. Moreover, a prior guidance (PG) module is designed to compute the correlation of high-level features in a non-parametric manner, producing a class-agnostic prior map with unbiased semantic information that guides the subsequent detection process. Equipped with these two modules, the model gains a strong discriminative ability to distinguish the target objects from distractors belonging to the base classes. Extensive experiments show that our method outperforms previous techniques by a large margin and achieves new state-of-the-art performance under various evaluation settings.
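As an illustration of the non-parametric correlation behind such a prior map, the sketch below scores every spatial location of a target feature map by its cosine similarity to a pooled query descriptor. The shapes, the average pooling, and the function name are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def prior_map(query_feat, target_feat, eps=1e-8):
    """Class-agnostic prior-map sketch: cosine similarity between the
    pooled query feature and each spatial location of the target
    feature map. query_feat: (C, Hq, Wq); target_feat: (C, H, W)."""
    q = query_feat.mean(axis=(1, 2))                    # pooled query descriptor, (C,)
    t = target_feat.reshape(target_feat.shape[0], -1)   # flatten spatial dims, (C, H*W)
    sim = (q @ t) / (np.linalg.norm(q) * np.linalg.norm(t, axis=0) + eps)
    return sim.reshape(target_feat.shape[1:])           # (H, W) guidance map

rng = np.random.default_rng(0)
guidance = prior_map(rng.random((8, 4, 4)), rng.random((8, 6, 6)))
```

Because the map is computed directly from feature correlations, with no learned parameters, it carries no bias towards the base classes seen in training.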
Network densification via dense small-cell deployment is one of the dominant evolutions of future cellular networks for increasing spectrum efficiency. Packet transmission delay and reliability in the resulting interference-limited heterogeneous cellular network (HCN) are essential performance metrics for system design. By modeling the locations of base stations (BSs) in the HCN as a superposition of independent Poisson point processes, we propose an analytical framework to derive the timely throughput of the HCN, which captures both the delay and the reliability performance. The analysis takes into account the BS activity and the temporal correlation of transmissions, both of which have a significant effect on network performance. The effects of mobility, BS density, and the association bias factor are investigated through numerical results, which show that the network performance derived while ignoring the temporal correlation of transmissions is overly optimistic.
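A minimal sketch of this spatial model: each tier of BSs is drawn as an independent homogeneous PPP and superposed, and a user associates with the tier maximizing biased received power. The densities, transmit powers, bias values, and path-loss exponent below are illustrative assumptions.

```python
import numpy as np

def sample_hcn(densities, side, rng):
    """Draw BS locations of a multi-tier HCN as a superposition of
    independent homogeneous Poisson point processes on a side x side window."""
    tiers = []
    for lam in densities:
        n = rng.poisson(lam * side * side)            # Poisson number of BSs per tier
        tiers.append(rng.uniform(0.0, side, size=(n, 2)))
    return tiers

def biased_association(user, tiers, powers, biases, alpha=4.0):
    """Associate the user with the tier maximizing biased received power
    under a simple d^-alpha path-loss law; returns the tier index."""
    best_tier, best_rp = -1, -np.inf
    for k, pts in enumerate(tiers):
        if len(pts) == 0:
            continue
        d = np.linalg.norm(pts - user, axis=1).min()  # nearest BS of tier k
        rp = biases[k] * powers[k] * d ** (-alpha)
        if rp > best_rp:
            best_tier, best_rp = k, rp
    return best_tier

rng = np.random.default_rng(0)
tiers = sample_hcn((5e-6, 5e-5), side=1000.0, rng=rng)  # macro and small-cell tiers
tier = biased_association(np.array([500.0, 500.0]), tiers,
                          powers=(40.0, 1.0), biases=(1.0, 5.0))
```

A bias larger than one on the small-cell tier offloads users away from the macro tier even when the macro signal is stronger, which is the association-bias effect studied in the numerical results.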
To meet the ever-increasing demand for mobile video services, one effective solution is to cache popular videos at edge nodes. In this paper, we propose an online stochastic learning algorithm with two time scales for joint caching and transmission optimization at the video-frame level. To overcome the drift distortion caused by the dependency among video frames, the transmission process is formulated as an infinite-horizon Markov decision process (MDP). We derive the equivalent Bellman equation and design an online value iteration algorithm for transmission via stochastic approximation. Since no closed-form expression relates the system performance to the caching policy, we design a gradient-free stochastic optimization algorithm to update the caching policy. Finally, simulation results show that our proposed algorithm achieves better performance than conventional caching algorithms.
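As a toy illustration of value iteration via stochastic approximation (not the paper's frame-level model), the sketch below runs the update V(s) ← V(s) + (1/t)(r + γV(s') − V(s)) along a single sampled trajectory of a small queueing MDP; the queue dynamics and costs are hypothetical.

```python
import random

def online_value_iteration(n_states, serve_cost=0.5, gamma=0.9, steps=2000, seed=0):
    """Online value iteration via stochastic approximation on a toy
    transmission MDP: the state is the frame-queue length, the action
    is whether to transmit the head frame, and one sampled transition
    updates the visited state's value with a decaying step size."""
    rng = random.Random(seed)
    V = [0.0] * n_states
    s = 0
    for t in range(1, steps + 1):
        best_q, best_next = None, s
        for a in (0, 1):                              # 0: idle, 1: transmit
            arrival = 1 if rng.random() < 0.4 else 0  # sampled frame arrival
            s_next = min(n_states - 1, max(0, s - a + arrival))
            q = -(s + serve_cost * a) + gamma * V[s_next]   # negative holding + service cost
            if best_q is None or q > best_q:
                best_q, best_next = q, s_next
        V[s] += (1.0 / t) * (best_q - V[s])           # stochastic-approximation update
        s = best_next
    return V

V = online_value_iteration(5)
```

Only the visited state is updated per step, using a single sampled transition rather than the full expectation, which is what makes the iteration online.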
In multi-hop cognitive radio networks (CRNs), channel switching and rerouting for a secondary user's (SU's) traffic flow protect the transmissions of the primary users (PUs); however, the extra overhead and delay they incur degrade the quality of the SU's transmission. In this paper, aiming to achieve stable end-to-end delivery for a secondary flow, we study the problem of joint routing and channel assignment (JRCA) by exploiting the concept of the expected switching interval (ESI). We first formulate the JRCA problem in a general setting without any constraint on the primary activity. Then, assuming Poisson arrival traffic for the PUs, we investigate the performance of the proposed scheme. Simulation results demonstrate that the proposed scheme enables a secondary flow to maintain, on average, a long uninterrupted end-to-end transmission time, and thus reduces the risk of the flow being disrupted by the appearance of the PUs.
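Under Poisson PU arrivals, the time until some PU appears on any channel used along a route is the minimum of independent exponential times, hence itself exponential with the summed rate, and the ESI is its mean. The sketch below picks the route/channel assignment with the largest ESI; the channel names, rates, and candidate routes are illustrative, not the paper's formulation.

```python
def expected_switching_interval(assignment, pu_rate):
    """ESI under Poisson PU arrivals: PUs appear on channel c at rate
    pu_rate[c], so the first appearance on any channel used along the
    route is exponential with the summed rate; the ESI is its mean."""
    return 1.0 / sum(pu_rate[c] for c in assignment)

def best_assignment(candidates, pu_rate):
    """Pick the route/channel assignment with the largest ESI."""
    return max(candidates, key=lambda a: expected_switching_interval(a, pu_rate))

pu_rate = {"ch1": 0.2, "ch2": 0.05, "ch3": 0.1}       # PU arrivals per second
candidates = [("ch1", "ch2"), ("ch2", "ch3"), ("ch1", "ch3")]
best = best_assignment(candidates, pu_rate)            # one channel per hop
```

Maximizing the ESI favours routes over lightly used primary channels, lengthening the interval between forced channel switches or reroutes.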
Heterogeneous backhaul deployment using different wired and wireless technologies is a potential solution to meet the traffic demand in small-cell and ultra-dense networks. It is therefore of cardinal importance to evaluate and compare the performance characteristics of the various backhaul technologies in order to understand their effect on the aggregate network performance and to provide guidelines for system design. In this chapter, we propose relevant backhaul models and study the delay performance of various backhaul technologies with different capabilities and characteristics, including fibre, xDSL, millimetre-wave (mm-wave) and sub-6 GHz. Using these models, we optimize the base station (BS) association so as to minimize the mean network packet delay in a macro-cell network overlaid with small cells. Furthermore, we model and analyse the backhaul deployment cost and show that there exists an optimal gateway density that minimizes the mean backhaul cost per small-cell BS. Numerical results are presented to show the delay performance characteristics of the different backhaul solutions. Comparisons between the proposed and traditional BS association policies reveal the significant effect of backhaul on network performance, which demonstrates the importance of jointly designing and optimizing the radio access and backhaul networks.
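As a crude illustration of delay-aware association (the chapter develops far more detailed per-technology models), the sketch below treats each backhaul link as an M/M/1 queue and associates with the BS minimizing radio-access plus backhaul delay; all rates and delays are invented for illustration.

```python
def mm1_delay(arrival, service):
    """Mean sojourn time (s) of an M/M/1 queue with the given arrival
    and service rates (packets/s); requires arrival < service."""
    assert arrival < service, "unstable backhaul queue"
    return 1.0 / (service - arrival)

# Illustrative per-technology backhaul service rates (packets/s).
backhaul_rate = {"fibre": 5000.0, "mm-wave": 2000.0, "xDSL": 400.0, "sub-6GHz": 800.0}

def delay_aware_association(load, radio_delay):
    """Associate with the BS type minimizing radio-access delay plus
    backhaul queueing delay."""
    return min(backhaul_rate,
               key=lambda b: radio_delay[b] + mm1_delay(load[b], backhaul_rate[b]))

load = {"fibre": 4000.0, "mm-wave": 1900.0, "xDSL": 100.0, "sub-6GHz": 200.0}
radio_delay = {"fibre": 0.020, "mm-wave": 0.001, "xDSL": 0.001, "sub-6GHz": 0.002}
best = delay_aware_association(load, radio_delay)
```

Note that the nominally fastest technology is not always the best choice once its offered load is accounted for, which is why backhaul-aware association can beat policies based on radio conditions alone.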
As an emerging networking technology, LTE in unlicensed spectrum (LTE-U) has drawn increasing attention. Owing to the strict coexistence requirements imposed by Wi-Fi networks, quality-of-service (QoS) provisioning for delay-sensitive traffic over LTE-U networks is challenging. In this paper, assuming the LTE-U network adopts a load-based equipment (LBE) channel access mechanism supporting the listen-before-talk (LBT) function, we propose a discrete-time Markov chain based approach to analyze the capacity of an LTE-U network serving delay-sensitive traffic. Specifically, we derive the maximum number of homogeneous traffic flows, each subject to the same packet-level QoS constraints on the maximum tolerable delay and the delay violation probability. The analytical results closely match the simulation results, and both the Wi-Fi traffic load and the QoS requirements of the delay-sensitive traffic have a significant impact on the capacity of the LTE-U network.
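The flavour of such a discrete-time Markov chain analysis can be sketched with a toy two-state (idle/busy) channel: compute the stationary distribution, then a geometric bound on the probability that a packet waits more than d slots for an idle slot. The transition probabilities and the waiting-time rule are illustrative, not the paper's model.

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a finite irreducible DTMC: the left
    eigenvector of P for eigenvalue 1, normalized to sum to one."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

# Toy 2-state channel: state 0 = idle (LTE-U may transmit), 1 = busy (Wi-Fi).
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = stationary(P)

# Prob. a packet arriving in a busy slot still waits after d more slots:
# the channel remains busy geometrically with per-slot prob. P[1, 1].
d = 3
violation = pi[1] * P[1, 1] ** d
```

Raising the Wi-Fi load (larger busy-to-busy probability) inflates this violation probability, which is how coexistence pressure eats into the number of delay-constrained flows the LTE-U cell can admit.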
To facilitate the private deployment of the industrial Internet of Things (IoT), applying LTE in unlicensed spectrum (LTE-U) is a promising approach: it both tackles the shortage of licensed spectrum and leverages the LTE protocol to meet stringent quality-of-service (QoS) requirements via centralized control. In this paper, we investigate the computation offloading problem in an LTE-U-enabled network, where the task on an IoT device is either carried out locally or offloaded to the LTE-U base station (BS). The offloading policy is formulated as an optimization problem that maximizes the long-term discounted reward, accounting for both the task completion profit and the task completion delay. Owing to the stochastic task arrival process at each device and Wi-Fi's contention-based random access, we reformulate the computation offloading problem as a Q-learning problem and solve it with a deep neural network based approximation method. Simulation results show that the proposed scheme considerably enhances the system performance.
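A tabular stand-in for such a scheme (the paper approximates the Q-function with a deep network instead) can be sketched as Q-learning over a queue-length state with a local-vs-offload action; all probabilities, profits, and costs below are hypothetical.

```python
import random

def q_learning_offload(episodes=500, gamma=0.9, eps=0.1, alpha=0.1, seed=0):
    """Tabular Q-learning sketch of the local-vs-offload decision.
    State: task-queue length at the IoT device (0..Q_MAX).
    Actions: 0 = compute locally, 1 = offload to the LTE-U BS."""
    rng = random.Random(seed)
    Q_MAX = 5
    q = {(s, a): 0.0 for s in range(Q_MAX + 1) for a in (0, 1)}
    s = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.choice((0, 1)) if rng.random() < eps else max((0, 1), key=lambda x: q[(s, x)])
        # assumed failure probs: 0.1 locally, 0.3 when offloading (the
        # unlicensed channel may be busy due to Wi-Fi contention)
        done = rng.random() > (0.1 if a == 0 else 0.3)
        arrival = rng.random() < 0.5               # stochastic task arrival
        s_next = min(Q_MAX, max(0, s - (1 if done else 0) + (1 if arrival else 0)))
        r = (1.0 if done else 0.0) - 0.2 * s       # completion profit minus delay cost
        q[(s, a)] += alpha * (r + gamma * max(q[(s_next, 0)], q[(s_next, 1)]) - q[(s, a)])
        s = s_next
    return q

q_table = q_learning_offload()
```

Replacing this table with a neural network that maps the state to the two action values yields the deep approximation used when the state space is too large to enumerate.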
Live video has recently attracted growing attention owing to its tremendous growth in mobile traffic. We focus on the scenario in which a large number of users gather under one base station to watch live content from several different video sources, and each user can freely change its receiving rate among several discrete levels. We therefore study a cooperative strategy enabled by non-orthogonal multiple access (NOMA), which lets all users share the same time, code, and frequency resources. Specifically, the scheme consists of two phases: in the broadcast transmission (BT) phase, the base station (BS) broadcasts a superposed message to all users; in the user-aided relaying (UR) phase, one user is selected as the helper to forward a superposed message carrying the information needed by the users that failed to decode in the BT phase. Two methods are used to set the power allocation coefficients in the two phases. The first is a fixed power allocation (FPA) method, in which the coefficients are constant; the second is a dynamic power allocation (DPA) method, in which the coefficients are determined dynamically according to the instantaneous channel state information (CSI). Moreover, two best-helper selection approaches are proposed, one for each power allocation method. Simulation results show the advantage of the proposed cooperative strategy in terms of the system outage probability.
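The difference between the two power allocation methods can be sketched for a two-user superposition: FPA fixes the weak user's coefficient, while DPA picks it just large enough, given the instantaneous channel gain, to meet an assumed rate target, and gives the remainder to the strong user. The rate-target rule and all numbers are illustrative, not the paper's exact design.

```python
def fpa_coefficients(a_weak=0.8):
    """Fixed power allocation: constant coefficients, with more power
    to the weak user so the SIC decoding order holds."""
    return a_weak, 1.0 - a_weak

def dpa_coefficients(g_weak, p, n0=1.0, r_min=1.0):
    """Dynamic power allocation sketch: choose the weak user's coefficient
    a so that its rate log2(1 + a*snr / (1 + (1-a)*snr)) meets r_min,
    where snr = p * g_weak / n0; the strong user gets the remaining power."""
    snr = p * g_weak / n0
    th = 2.0 ** r_min - 1.0                        # SINR threshold for r_min
    a = min(1.0, th * (1.0 + snr) / (snr * (1.0 + th)))
    return a, 1.0 - a

a_w, a_s = dpa_coefficients(g_weak=0.5, p=20.0)    # snr = 10
```

Under DPA, a strong instantaneous channel for the weak user frees power for the strong user, which is the CSI-driven gain over FPA reflected in the outage results.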