We investigate the following fundamental question: how fast can information be collected from a wireless sensor network organized as a tree? To address this, we explore and evaluate a number of different techniques using realistic simulation models under the many-to-one communication paradigm known as convergecast. We first consider time scheduling on a single frequency channel with the aim of minimizing the number of time slots (schedule length) required to complete a convergecast. Next, we combine scheduling with transmission power control to mitigate the effects of interference, and show that while power control helps reduce the schedule length on a single frequency, scheduling transmissions over multiple frequencies is more efficient. We give lower bounds on the schedule length when interference is completely eliminated, and propose algorithms that achieve these bounds. We also evaluate the performance of various channel assignment methods and find empirically that, for moderate-size networks of about 100 nodes, multifrequency scheduling suffices to eliminate most of the interference. The data collection rate is then limited not by interference but by the topology of the routing tree. To this end, we construct degree-constrained spanning trees and capacitated minimal spanning trees, and show significant improvements in scheduling performance over different deployment densities. Lastly, we evaluate the impact of different interference and channel models on the schedule length.
Wireless sensor networks offer an attractive choice for low-cost and easy-to-deploy solutions for intelligent traffic guidance systems and parking lot applications. In this paper, we propose the use of a combination of magnetic and ultrasonic sensors for accurate and reliable detection of vehicles in a parking lot. We describe a modified version of the min-max algorithm for detecting vehicles using magnetometers, along with an algorithm for ultrasonic sensors. Through extensive real-world experiments conducted in a multi-storied university parking structure, we compare the pros and cons of different sensing modalities, and show that ultrasonic sensors combined with magnetometers are an excellent choice for accurate vehicle detection. We demonstrate the efficacy of our proposed approach through an elaborate car-counting experiment lasting over a day, and show promising results using these two sensing modalities.
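To convey the general idea behind min-max detection with magnetometers, the sketch below flags a vehicle whenever the spread between the minimum and maximum readings inside a short sliding window exceeds a threshold. This is a generic toy illustration under assumed parameters (window length, threshold, and the example readings are made up), not the modified algorithm described in the paper.

```python
# Generic min-max style vehicle detector for magnetometer readings (sketch).
from collections import deque

def minmax_detect(samples, window=20, threshold=3.0):
    """Yield a boolean per sample: True while a vehicle is believed present.

    samples   -- iterable of magnetic field magnitudes (arbitrary units)
    window    -- number of recent samples used to track the min/max envelope
    threshold -- min-to-max spread that signals a vehicle over the sensor
    """
    recent = deque(maxlen=window)
    for s in samples:
        recent.append(s)
        # A large spread inside the window indicates a local disturbance of
        # the Earth's magnetic field, i.e. a metallic object (vehicle) nearby.
        yield (max(recent) - min(recent)) > threshold

# Example: a flat baseline with a brief disturbance in the middle.
readings = [50.0] * 30 + [58.0] * 10 + [50.0] * 30
flags = list(minmax_detect(readings))
print(sum(flags), "samples flagged as 'vehicle present'")
```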
Multicast device-to-device (D2D) transmission is important for applications such as local file transfer in commercial networks and is also a required feature in public safety networks. In this paper, we propose a tractable baseline multicast D2D model and use it to analyze important multicast metrics such as the coverage probability, the mean number of covered receivers, and the throughput. In addition, we examine how multicast performance is affected by factors such as dynamics (due, e.g., to mobility) and network assistance. Taking the mean number of covered receivers as an example, we find that simple repetitive transmissions help, but the gain quickly diminishes as the number of repetitions increases. Meanwhile, dynamics and network assistance (i.e., allowing the network to relay the multicast signals) can help cover more receivers. We also explore how to optimize multicasting, e.g., by choosing the optimal multicast rate and the optimal number of retransmissions.
LTE Rel-8 and WiMAX are the two main wireless broadband technologies based on OFDM that are currently being commercialized. Both are being enhanced (as LTE-Advanced and 802.16m) to support higher peak rates, higher throughput and coverage, and lower latencies, resulting in a better user experience. Furthermore, both LTE-Advanced and 802.16m were approved by the ITU as IMT-Advanced technologies. Several operators are also considering deploying both of these technologies or migrating their existing WiMAX systems to LTE or 802.16m. In this chapter, these two main broadband technologies are compared with respect to their features and system performance, and WiMAX/LTE co-existence and migration scenarios are briefly discussed.
The LTE uplink provides an increase in capacity (both sector and cell-edge) by a factor of 2–3 compared with previous UMTS high-speed uplink packet access (HSUPA) systems, with substantially lower latency. This enables efficient support of high-rate data services such as FTP, HDTV broadcast, and HTTP, as well as delay-sensitive services such as VoIP and video streaming. In LTE, several technological enhancements have been introduced in the uplink air interface to enable this improvement. These include orthogonal uplink transmission from intra-cell users, frequency-selective scheduling, a shorter subframe size, support for 64-QAM modulation, multi-user spatial multiplexing, subframe bundling, semi-persistent scheduling, fractional power control, inter-cell interference control, and efficient control channels.
In this book, system-level performance results based on comprehensive system simulations of cellular networks are provided. An example of the cellular layout used for system simulation is shown in Figure A.1. This is a typical 19-site, 57-cell system using a hexagonal grid. In this case, a cell is viewed as a sector of the physical site; in LTE, however, each cell is treated as an independent eNB. The spacing between sites depends on the deployment scenario: in an urban micro-cell deployment, for example, the inter-site distance is 200 m. Users are dropped randomly into the simulation space; in the urban micro-cell deployment, 570 users are dropped. After the users have been dropped, long-term radio characteristics such as pathloss and shadowing are calculated, and users are then assigned to cells using minimum pathloss as the cell-selection criterion. For the urban micro-cell example, on average approximately 10 users are associated with each cell.
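The following minimal sketch reproduces this user-drop and cell-selection workflow under simplified, assumed models (a log-distance pathloss formula, 8 dB log-normal shadowing, and an approximate two-ring site placement rather than an exact hexagonal lattice). It is meant only to illustrate the procedure, not the calibrated simulation methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

ISD = 200.0          # inter-site distance for the urban micro-cell example (m)
N_USERS = 570        # users dropped into the 57-cell system (~10 per cell)

def site_layout(isd):
    """19 sites: centre plus two rings (approximate placement, not an exact
    hexagonal lattice)."""
    sites = [(0.0, 0.0)]
    for ring in (1, 2):
        for k in range(6 * ring):
            ang = 2 * np.pi * k / (6 * ring)
            sites.append((ring * isd * np.cos(ang), ring * isd * np.sin(ang)))
    return np.array(sites)

sites = site_layout(ISD)                      # (19, 2)
cells = np.repeat(sites, 3, axis=0)           # 3 sectors per site -> 57 "cells"

# Drop users uniformly over the simulation area.
span = 2.5 * ISD
users = rng.uniform(-span, span, size=(N_USERS, 2))

# Long-term radio characteristics: log-distance pathloss + log-normal shadowing
# (both formulas below are illustrative assumptions).
d = np.linalg.norm(users[:, None, :] - cells[None, :, :], axis=2)   # (570, 57)
pathloss_db = 128.1 + 37.6 * np.log10(np.maximum(d, 1.0) / 1000.0)
shadowing_db = rng.normal(0.0, 8.0, size=d.shape)

# Cell selection: each user attaches to the cell with minimum total loss.
serving_cell = np.argmin(pathloss_db + shadowing_db, axis=1)
counts = np.bincount(serving_cell, minlength=cells.shape[0])
print("users per cell: mean %.1f, min %d, max %d"
      % (counts.mean(), counts.min(), counts.max()))
```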
In recent years, renewed interest has been shown in the possibility of using meteor burst links in tactical communications, both for networking and for covert operations. Some of the applications that recent performance improvements would permit are evaluated. In evaluating the feasibility of a meteor burst implementation, certain technical and physical limitations are addressed. For these applications to succeed, interoperability with other communication systems is necessary. The level of interoperability with other media, and the standards necessary to assure this interoperability, are examined. Methods of minimizing and combating jamming are proposed. Meteor burst systems can be used in a large number of applications within a tactical environment. The principal disadvantage of the meteor burst medium is interference to other spectrum users from the probe end, and interference from other users at the receiver end. The low throughput characteristic of meteor burst is comparable to some of the channel capacities used in other systems. Interoperability with other networks or communications links is relatively easy if certain straightforward protocols and standards are established.
Convergecast, namely the many-to-one flow of data from a set of sources to a common sink over a tree-based routing topology, is a fundamental communication primitive in wireless sensor networks. For real-time, mission-critical, and high-data-rate applications, it is often critical to maximize the aggregate data collection rate (throughput) at the sink node as well as to minimize the time (delay) required for packets to reach it. In this thesis, we study the algorithmic aspects of jointly optimizing both throughput and delay for aggregated data collection in sensor networks. Our contributions lie in designing efficient algorithms with provably good, worst-case performance bounds for arbitrarily deployed networks. To the best of our knowledge, we are the first to address these two mutually conflicting performance objectives, throughput and delay, under the same optimization framework and to develop techniques that meet the stringent requirements of fast data collection.
Our approach to the throughput-delay performance trade-off comprises three techniques: (i) multi-channel scheduling, (ii) routing over optimal topologies, and (iii) transmission power control. We exploit the benefits of multiple frequency channels to design efficient TDMA scheduling algorithms under both the graph-based and the SINR-based interference models. In particular, by decoupling the joint frequency and time-slot assignment problem into two separate subproblems, frequency assignment followed by time-slot assignment, we show that our scheduling algorithms have constant-factor and logarithmic approximation ratios with respect to the optimal throughput for random geometric graphs as well as for SINR-based models.
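As a rough illustration of this decoupling (a sketch under simplified assumptions, not the algorithms analyzed in the thesis), the snippet below first assigns channels to nodes and then greedily packs the tree links into time slots, treating two links as conflicting when they share a node or reuse a channel within interference range. The round-robin channel rule and the interferes predicate are placeholders for whatever channel-assignment strategy and interference model one actually adopts.

```python
# Two-phase (channel, then time slot) convergecast scheduling sketch on a tree.
from itertools import count

def schedule(tree_parent, num_channels, interferes):
    """tree_parent: dict child -> parent (the sink has no entry).
    interferes(u, v): True if simultaneous same-channel use of links
    (u -> parent[u]) and (v -> parent[v]) would collide."""
    # Phase 1: receiver-based channel assignment (children transmit on the
    # channel of their parent); here a simple round-robin placeholder.
    nodes = set(tree_parent) | set(tree_parent.values())
    channel = {n: i % num_channels for i, n in enumerate(sorted(nodes))}

    # Phase 2: greedy time-slot assignment over the tree links.
    slot = {}
    for link in tree_parent.items():          # (sender, receiver) pairs
        s, r = link
        for t in count():
            ok = True
            for other, t_other in slot.items():
                if t_other != t:
                    continue
                s2, r2 = other
                same_node = bool({s, r} & {s2, r2})
                same_channel = channel[r] == channel[r2]
                if same_node or (same_channel and interferes(s, s2)):
                    ok = False
                    break
            if ok:
                slot[link] = t
                break
    return channel, slot

# Tiny example: a 2-channel schedule for a 5-node tree rooted at node 0,
# with a toy interference rule (all senders interfere with each other).
tree = {1: 0, 2: 0, 3: 1, 4: 1}
ch, slots = schedule(tree, num_channels=2, interferes=lambda u, v: True)
print(ch, slots)
```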
To further enhance the data collection rate and bound the maximum delay, we study the degree-radius trade-off in spanning trees and propose algorithms under a bicriteria optimization framework. In particular, we construct bounded-degree, minimum-radius spanning trees that achieve constant-factor approximations of both the maximum node degree and the tree radius. We also show that our multi-channel scheduling algorithms perform much better on such trees in maximizing the aggregate throughput and minimizing the maximum delay, thus achieving the best of both worlds. Lastly, we design efficient, distributed power control schemes for sensor networks deployed in 3-D, where very high node densities cause high interference and hence low network throughput. Our proposed algorithms have low computational overhead compared to the state of the art and, using local geometric information and tools from computational geometry, produce sparse yet connected topologies in 3-D, thus reducing interference and allowing for high throughput.
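For intuition about the degree-radius tension, the sketch below grows a spanning tree in BFS order from the sink while refusing to attach a node to a parent whose degree budget is exhausted. This is an illustrative greedy heuristic only, not the constant-factor approximation algorithms described above; max_degree and the tie-breaking rule are assumptions.

```python
# Greedy bounded-degree tree construction (illustrative heuristic).
from collections import deque

def bounded_degree_tree(adj, root, max_degree):
    """adj: dict node -> set of neighbours; returns dict child -> parent.
    Nodes whose eligible neighbours have no remaining degree budget are
    skipped for now and may be attached later from another frontier node."""
    parent, depth = {}, {root: 0}
    degree = {n: 0 for n in adj}
    frontier = deque([root])
    while frontier:
        u = frontier.popleft()
        for v in sorted(adj[u]):
            if v in depth:
                continue
            # Attach v to the shallowest connected neighbour with spare budget,
            # which keeps the hop distance to the sink (tree radius) small.
            candidates = [w for w in adj[v]
                          if w in depth and degree[w] < max_degree]
            if not candidates:
                continue
            p = min(candidates, key=lambda w: depth[w])
            parent[v], depth[v] = p, depth[p] + 1
            degree[p] += 1
            degree[v] += 1
            frontier.append(v)
    return parent

# Tiny example: a 5-node connectivity graph with the sink at node 0.
adj = {0: {1, 2, 3}, 1: {0, 2, 4}, 2: {0, 1, 3}, 3: {0, 2}, 4: {1}}
print(bounded_degree_tree(adj, root=0, max_degree=2))
```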
In this chapter, the details of LTE downlink transmission are discussed. The LTE downlink air interface uses the OFDM multiple-access technique described in Chapter 2. The use of OFDM transmission technology provides significant advantages over other radio transmission techniques, including high spectral efficiency, support for broadband data transmission, the absence of intra-cell interference (i.e., multiple users in the same cell can share the same subframe without interfering with each other), resistance to inter-symbol interference arising from multipath propagation, natural support for MIMO schemes, a low-complexity receiver, and support for frequency-domain techniques such as frequency-selective scheduling, single-frequency networks, and soft fractional frequency reuse. In addition to OFDM, LTE incorporates several other features to enhance system performance and user experience, including a short frame size to minimize latency, a single-frequency network to provide high-data-rate broadcast services, VoIP support to increase voice capacity, coverage for very large cells, and coverage for high-speed users (up to 350 km/h) [1]–[2].
Integrated access and backhaul (IAB) is an important new feature in 5G NR that enables rapid and cost-effective millimeter-wave (mmWave) deployments through self-backhauling in the same spectrum. IAB deployments can achieve excellent cell-edge coverage, for example, uplink rates above 100 Mb/s, while significantly reducing the amount of required fiber. This article provides a primer on IAB, contrasting it with the many failed multihop systems that preceded it. We conduct a large-scale study of coverage and rate performance based on a plausible deployment in Chicago's Lincoln Park neighborhood, using ray tracing in the 39 GHz band. The study demonstrates that, as theory predicts, an IAB solution provides a massive coverage advantage for early mmWave rollouts with only a small number of fiber-connected (donor) base stations, for example, fewer than 10 per km². We show that, as the UE and traffic loads increase over time and the 5G ecosystem matures, per-user throughput can be maintained by replacing IAB (relay) nodes with donor nodes, that is, by slowly extending the fiber network.