Recent years have witnessed the rapid growth of crowdsourced multimedia services, such as text-based Twitter, image-based Flickr, and video streaming-based Twitch and YouTube live events. Empowered by today's rich tools for multimedia generation and distribution, as well as the growing prevalence of high-speed networks and smart devices, most multimedia content is now crowdsourced from amateur users rather than from commercial and professional content providers, and can be easily accessed by other users in a timely manner. Since cloud computing offers reliable, elastic, and cost-effective resource allocation, it has been adopted by many multimedia service providers as the underlying infrastructure. In this thesis, we formulate cloud resource allocation in crowdsourced multimedia services, where real-time user interaction is a fundamental concern, as a standard network utility maximization (NUM) problem with coupled constraints, and develop distributed solutions based on dual decomposition. We further propose practical improvements for content generation and big data processing in crowdsourced multimedia services in a cloud environment. Crowdsourced multimedia services also rely on convenient mobile Internet access, since mobile users account for a large portion of both content generators and content consumers. Rich multimedia content, especially images and videos, puts significant pressure on the infrastructure of state-of-the-art cellular networks. Device-to-device (D2D) communication, which exploits local wireless resources, has been suggested as a promising complement for supporting proximity-based applications. In this thesis, we jointly consider resource allocation and power control under the heterogeneous quality of service (QoS) requirements of diverse multimedia applications.
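The dual-decomposition idea behind the NUM formulation can be illustrated with a minimal sketch. This is a generic toy instance, not the thesis model: each user maximizes a log utility subject to one coupled capacity constraint, users solve their subproblems independently given a dual price, and a coordinator updates the price by projected subgradient ascent.

```python
# Minimal dual-decomposition sketch for a toy NUM problem
# (illustrative only; the thesis's actual model is richer):
#   maximize  sum_i log(x_i)   subject to  sum_i x_i <= C,  x_i > 0.

def solve_num(n_users=4, capacity=8.0, step=0.05, iters=3000):
    lam = 1.0  # dual variable: price of the shared resource
    x = [0.0] * n_users
    for _ in range(iters):
        # Per-user subproblem: max log(x) - lam * x  =>  x = 1 / lam,
        # solvable locally by each user given only the price lam.
        x = [1.0 / lam for _ in range(n_users)]
        # Coordinator's price update: raise the price when total
        # demand exceeds capacity, lower it otherwise.
        lam = max(1e-6, lam + step * (sum(x) - capacity))
    return x, lam
```

At the fixed point the price settles at `lam = n_users / capacity`, so each user receives an equal share `capacity / n_users`, which is the known proportionally fair allocation for log utilities.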
With the prevalence of broadband and wireless mobile network access, distributed interactive applications (DIAs) such as online gaming have attracted a vast number of users over the Internet. The deployment of these systems, however, comes with peculiar hardware/software requirements on the users' consoles. Recently, such industrial pioneers as Gaikai, Onlive, and Ciinow have offered a new generation of cloud-based DIAs (CDIAs), which shifts the necessary computing load to cloud platforms and largely relieves the pressure on individual users' consoles. In this paper, we aim to understand the existing CDIA framework and highlight its design challenges. Our measurement reveals the internal structures and operations of real CDIA systems and identifies the critical role of cloud proxies. While this design makes effective use of cloud resources to mitigate the clients' workloads, it may also significantly increase the interaction latency among clients if not carefully handled. Besides the extra network latency caused by cloud proxy involvement, we find that computation-intensive tasks (e.g., game video encoding) and bandwidth-intensive tasks (e.g., streaming the game screens to clients) together create a severe bottleneck in CDIAs. Our experiments indicate that when the cloud proxies are virtual machines (VMs) in the cloud, the computation-intensive and bandwidth-intensive tasks may seriously interfere with each other. We accordingly capture this feature in our model and present an interference-aware solution, which not only smartly allocates workloads but also dynamically assigns capacities across VMs based on their arrival/departure patterns.
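The interference effect can be sketched with a toy placement heuristic. The cross-term cost model and the greedy rule below are assumptions for illustration only; they are not the paper's actual model or algorithm, which jointly optimizes workload allocation and VM capacities.

```python
# Toy interference-aware placement sketch (assumed cost model, not
# the paper's). Each VM carries a CPU load (encoding) and a
# bandwidth load (streaming); the cross term models the measured
# mutual interference between the two task types.

def effective_latency(cpu_load, bw_load, interference=0.5):
    # Assumed bilinear interference penalty: latency inflates when
    # both CPU and bandwidth loads are simultaneously high.
    return cpu_load + bw_load + interference * cpu_load * bw_load

def place_session(vms, cpu_demand, bw_demand):
    """Greedily place one session on the VM that yields the smallest
    interference-aware latency. `vms` is a list of mutable
    [cpu_load, bw_load] pairs, updated in place."""
    best = min(range(len(vms)),
               key=lambda i: effective_latency(vms[i][0] + cpu_demand,
                                               vms[i][1] + bw_demand))
    vms[best][0] += cpu_demand
    vms[best][1] += bw_demand
    return best
```

Note how the bilinear term steers a mixed session toward a moderately loaded VM rather than one already saturated on either resource, capturing the measured encoding/streaming interference qualitatively.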
Edge computing can greatly reduce network load and user response delay, effectively compensating for the shortcomings of cloud computing. However, edge storage nodes have far less space than cloud data centers, so an appropriate cache replacement policy is needed to evict infrequently requested data in order to cache newly accessed data. This paper focuses on the problem of data replacement, and the corresponding replacement strategy, when an edge storage node runs out of buffer space. A maximum file utility of system (Max-FUS) cache replacement algorithm is proposed to solve this problem. Finally, an edge computing system model is established to verify the feasibility of the algorithm; MATLAB simulation experiments show that Max-FUS achieves better performance than existing methods.
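The general shape of utility-based eviction can be sketched as follows. The exact Max-FUS utility function is not reproduced here; as a stand-in assumption we rank files by request rate divided by size and evict the lowest-utility files first.

```python
# Generic utility-driven cache eviction sketch (the actual Max-FUS
# utility function is not given here; utility = request_rate / size
# is an assumed stand-in).

def evict_for(cache, capacity, new_file):
    """cache: dict name -> (size, request_rate). Evict the
    lowest-utility files until `new_file` = (name, (size, rate))
    fits within `capacity`, then insert it."""
    name, (size, rate) = new_file
    used = sum(s for s, _ in cache.values())
    # Evict in increasing utility order while space is insufficient.
    for victim in sorted(cache, key=lambda f: cache[f][1] / cache[f][0]):
        if used + size <= capacity:
            break
        used -= cache[victim][0]
        del cache[victim]
    if used + size <= capacity:
        cache[name] = (size, rate)
    return cache
```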
Recent years have witnessed the booming popularity of crowdsourced live streaming (CLS) platforms, through which numerous amateur broadcasters live stream their video content to viewers around the world. The heterogeneous qualities and formats of the source streams, however, require massive computational resources to transcode them into multiple industry-standard quality versions to serve viewers with distinct configurations, and the delays experienced by viewers at different locations should be well synchronized to support community interactions. This article attempts to address these challenges and to explore the opportunities offered by new-generation computation paradigms, in particular fog computing. We present a novel fog-based transcoding framework for CLS platforms that offloads the transcoding workload to the network edge (i.e., the massive number of viewers). We evaluate our design through a PlanetLab-based experiment and a real-world viewer transcoding experiment.
Estimating a channel that is subject to frequency selective Rayleigh fading is a challenging problem in an orthogonal frequency division multiplexing (OFDM) system. We propose an enhanced channel estimation algorithm that combines previously proposed EM-based algorithms with a least squares polynomial fitting (LSPF) approach. The combined algorithm can efficiently estimate the channel response of an OFDM system operating in an environment with multipath fading and additive white Gaussian noise (AWGN), improving the channel estimate obtained from the EM-based algorithms through polynomial fitting. Simulation results show that both the bit error rate (BER) and the mean square error (MSE) of the channel estimate are improved by the algorithm. In particular, at the cost of additional computation and demodulation delay, the MSE can be made smaller than the Cramer-Rao lower bound (CRLB).
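The LSPF smoothing step can be illustrated with a small least-squares polynomial fit. This sketch is real-valued for brevity and is not the paper's exact procedure: in an OFDM receiver the low-order fit would be applied across subcarriers to the complex channel response, exploiting its smoothness to suppress estimation noise.

```python
# Least-squares polynomial fit via the normal equations, solved by
# Gaussian elimination (adequate for the low degrees used in
# channel smoothing). Real-valued toy version.

def polyfit_ls(xs, ys, degree):
    n = degree + 1
    # Normal equations A^T A c = A^T y for the Vandermonde matrix A.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)]
           for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    # Back substitution.
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (aty[i] - sum(ata[i][j] * coef[j]
                                for j in range(i + 1, n))) / ata[i][i]
    return coef  # coef[i] multiplies x**i
```

Fitting noisy per-subcarrier estimates with a degree far below the number of subcarriers is what allows the smoothed (biased) estimate to achieve an MSE below the unbiased CRLB.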
The combination of multiple antennas and orthogonal frequency division multiplexing (OFDM) provides reliable communications over frequency selective fading channels. We investigate this approach and focus on the application of space-time block codes (STBC) and space-frequency block codes (SFBC) in OFDM systems. We compare the performance of maximum likelihood (ML), zero-forcing (ZF), and conventional detection algorithms, and show that ZF provides a good trade-off between computational complexity and performance. The problem of channel estimation in STBC-OFDM and SFBC-OFDM systems is also studied, including derivation of the Cramer-Rao lower bound (CRLB). Since knowledge of the channel is required to coherently decode STBC-OFDM and SFBC-OFDM, we propose an iterative channel estimation algorithm based on the EM algorithm that requires very few pilot symbols. The CRLB can be achieved by this channel estimation algorithm.
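As a concrete instance of STBC detection, the classic 2x1 Alamouti combining on a single OFDM subcarrier can be sketched as below. This is textbook material, not the paper's specific contribution; the channel is assumed known and constant over the two symbol periods, and for this orthogonal code the linear combiner coincides with ML detection.

```python
# Alamouti (2 Tx, 1 Rx) combining on one subcarrier.
# Transmission: period 1 sends (s1, s2); period 2 sends (-s2*, s1*),
# so r1 = h1*s1 + h2*s2 and r2 = -h1*conj(s2) + h2*conj(s1) (+noise).

def alamouti_decode(r1, r2, h1, h2):
    """r1, r2: received samples in two consecutive periods;
    h1, h2: complex channel gains from the two transmit antennas.
    Returns estimates of the two transmitted symbols."""
    norm = abs(h1) ** 2 + abs(h2) ** 2
    # Orthogonality of the code makes the cross terms cancel,
    # leaving each symbol scaled by |h1|^2 + |h2|^2.
    s1 = (h1.conjugate() * r1 + h2 * r2.conjugate()) / norm
    s2 = (h2.conjugate() * r1 - h1 * r2.conjugate()) / norm
    return s1, s2
```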
Sonar captures visual representations of underwater objects and structures using sound wave reflections, making it essential for exploration, mapping, and continuous surveillance in wild ecosystems. Real-time analysis of sonar data is crucial for time-sensitive applications, including environmental anomaly detection and in-season fishery management, where rapid decision-making is needed. However, the lack of both relevant datasets and pre-trained deep neural network (DNN) models, coupled with resource limitations in wild environments, hinders the effective deployment and continuous operation of live sonar analytics. We present SALINA, a sustainable live sonar analytics system designed to address these challenges. SALINA enables real-time processing of acoustic sonar data with spatial and temporal adaptations, and features energy-efficient operation through a robust energy management module. Deployed for six months at two inland rivers in British Columbia, Canada, SALINA provided continuous 24/7 underwater monitoring, supporting fishery stewardship and wildlife restoration efforts. Through extensive real-world testing, SALINA demonstrated up to a 9.5% improvement in average precision and a 10.1% increase in tracking metrics. The energy management module successfully handled extreme weather, preventing outages and reducing contingency costs. These results offer valuable insights for long-term deployment of acoustic data systems in the wild.
The network digital twin is a key enabler for the human-centric wireless metaverse, requiring fine-grained replication, high-fidelity screen rendering, and the integration of emerging intelligent technologies. However, existing frameworks for the network digital twin overlook the integration of human-centric features within the metaverse and the importance of asynchronous data collection, both of which are indispensable for actualizing the metaverse and attaining self-maintenance capabilities. To address this issue, in this article we propose a human-centric framework and a self-maintenance mechanism for the network digital twin, utilizing continuous prediction and error tolerance to enhance the performance of digital twin (DT) decision-making. Specifically, by considering the interplay among different components, we present a human-centric framework for the network digital twin, comprising the device twin layer, network twin layer, artificial intelligence service layer, and user intent layer. In consideration of asynchronous information, we propose a self-maintenance mechanism facilitated by two key capabilities: error tolerance and continuous prediction. Furthermore, we conduct a resource allocation experiment to validate the efficiency of the proposed framework and methods. The results demonstrate that the framework reduces latency by employing request prediction and robust optimization.
Estimating a channel that is subject to frequency selective Rayleigh fading is a challenging problem in an orthogonal frequency division multiplexing (OFDM) system. We propose an EM-based iterative algorithm to efficiently estimate the channel impulse response (CIR) of an OFDM system. This algorithm improves the channel estimate by making use of pilot tones to obtain the initial estimate for the iterative steps. Simulation results show that the bit error rate (BER) can be significantly reduced by this algorithm.
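The pilot-based initialization step can be sketched as follows. This is a generic illustration, not the paper's exact procedure: least-squares estimates H[k] = Y[k] / X[k] are formed at the pilot subcarriers (pilot pattern and linear interpolation are assumptions here), giving the starting point that the EM iterations then refine.

```python
# Pilot-tone least-squares initial channel estimate with linear
# interpolation across subcarriers (assumed comb-type pilot pattern;
# illustrative only).

def initial_estimate(rx, tx_pilots, pilot_idx, n_sc):
    """rx: received symbols on all n_sc subcarriers; tx_pilots: known
    pilot symbols, ordered the same as the sorted list pilot_idx."""
    # Per-pilot LS estimate: H[k] = Y[k] / X[k].
    h_p = {k: rx[k] / p for k, p in zip(pilot_idx, tx_pilots)}
    h = [0j] * n_sc
    # Linear interpolation between consecutive pilots.
    for a, b in zip(pilot_idx, pilot_idx[1:]):
        for k in range(a, b + 1):
            t = (k - a) / (b - a)
            h[k] = (1 - t) * h_p[a] + t * h_p[b]
    # Hold the estimate flat outside the first/last pilot.
    for k in range(pilot_idx[0]):
        h[k] = h_p[pilot_idx[0]]
    for k in range(pilot_idx[-1] + 1, n_sc):
        h[k] = h_p[pilot_idx[-1]]
    return h
```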