Toward Optimal Hybrid Service Function Chain Embedding in Multiaccess Edge Computing
40 Citations · 41 References · 10 Related Papers
Abstract:
The emerging multiaccess edge computing (MEC) architecture brings the needed computing resources to the network edge. Many 5G and Internet of Things (IoT) applications in MEC systems are latency sensitive and computation intensive. To flexibly provide and manage network service requests in MEC systems, network function virtualization (NFV) can be employed to create a chain of service functions (SFs), namely, an SF chain (SFC). Through the SFC, the customer forwards user data to the edge server/cloud, and the edge server/cloud may return the processed results/models to the customer. When the forward and backward traffic carry different content, they may require different SFs, which calls for a hybrid SFC (h-SFC). In this article, we study how to minimize the latency cost when embedding an h-SFC in MEC systems. We define a new problem called minimum latency hybrid SFC embedding (ML-HSFCE) and propose an algorithm, namely, optimal hybrid SFC embedding (Opt-HSFCE), to optimally embed a given h-SFC in MEC systems. Our extensive simulations and analysis show that the proposed Opt-HSFCE requires far less runtime than the brute-force algorithm and significantly outperforms schemes directly extended from existing techniques.
Keywords: Mobile Edge Computing
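The chain-embedding objective can be illustrated with a small sketch. The paper's Opt-HSFCE algorithm is not reproduced here; the following is a generic dynamic program, under an assumed latency model (the `proc` and `link` tables are illustrative), that places an ordered chain of SFs on network nodes to minimize total processing plus link latency along one direction of the path:

```python
def embed_chain(chain, nodes, proc, link, src, dst):
    """Minimum-latency embedding of an ordered SF chain.
    proc[v][sf]: processing latency of SF `sf` on node `v` (assumed model).
    link[u][v]: one-way link latency from `u` to `v` (assumed model).
    best[v] = min latency to serve the chain prefix with its last SF on v."""
    best = {v: link[src][v] + proc[v][chain[0]] for v in nodes}
    for sf in chain[1:]:
        best = {v: min(best[u] + link[u][v] for u in nodes) + proc[v][sf]
                for v in nodes}
    return min(best[v] + link[v][dst] for v in nodes)

# Toy instance (all numbers are made-up illustrative latencies):
proc = {'a': {'f1': 2, 'f2': 10}, 'b': {'f1': 3, 'f2': 1}}
link = {'s': {'a': 1, 'b': 5},
        'a': {'a': 0, 'b': 1, 't': 4},
        'b': {'a': 1, 'b': 0, 't': 1}}
print(embed_chain(['f1', 'f2'], ['a', 'b'], proc, link, 's', 't'))  # -> 6
```

A full h-SFC embedding would run a pass like this for the forward chain and another for the backward chain, coupling the two at shared nodes; the sketch covers only a single direction.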
Vehicular Edge Computing (VEC) enables task offloading from vehicles to edge servers deployed on Road Side Units (RSUs), thus enhancing the task processing performance of the vehicles. However, in a multi-RSU VEC scenario, the uneven geographical distribution of vehicles naturally causes load imbalance among the edge servers and leads to overload and performance degradation of the edge servers in hot areas. To this end, in this paper, we propose a joint task offloading and resource allocation scheme for VEC with edge-edge cooperation, in which tasks offloaded to a high-load edge server can be further offloaded to other low-load edge servers. Our objective is to minimize the total task processing delay of all the vehicles while guaranteeing the task processing delay tolerance and the holding time of each vehicle. An M/M/1 queue is used to model the task queuing and task computing processes on each RSU. An exact potential game is adopted to model the competition for task offloading among the RSUs. A two-stage iterative algorithm is designed to decompose the optimization problem into two stages and solve them iteratively. We analyze the computational complexity of the algorithm and conduct extensive simulations by varying crucial parameters. The superiority of our scheme is demonstrated in comparison with three other reference schemes.
Keywords: Mobile Edge Computing · Citations: 32
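The M/M/1 model referenced above has a closed-form mean sojourn time, which makes the load-balancing motivation concrete: delay grows sharply as an RSU's arrival rate approaches its service rate. A minimal sketch, with illustrative rates:

```python
def mm1_sojourn_time(arrival_rate, service_rate):
    """Mean time a task spends in an M/M/1 system (queuing + service):
    W = 1 / (mu - lambda), valid only for a stable queue (lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

# Offloading more tasks to an already-loaded RSU raises its delay sharply:
print(mm1_sojourn_time(2, 10))  # -> 0.125
print(mm1_sojourn_time(9, 10))  # -> 1.0
```

This nonlinearity is what makes further offloading from a high-load RSU to low-load RSUs pay off in total delay.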
Mobile Edge Computing (MEC) provides an attractive platform to bring data processing closer to its source in a networked environment. The MEC layer is responsible for effectively handling the jobs offloaded by edge devices in the edge layer; the jobs are served by virtual machines on the MEC servers. In this paper, we propose a power-efficient job placement approach for Mobile Edge Computing that aims to minimize the number of required active MEC servers and thereby reduce power consumption. The experimental results show that the proposed algorithm can significantly reduce power consumption.
Keywords: Mobile Edge Computing, Edge device · Citations: 3
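Minimizing the number of active servers is closely related to bin packing. The paper's actual placement algorithm is not given above, so the following is only a hedged sketch using the classic first-fit-decreasing heuristic, with job demands and server capacity in made-up units:

```python
def place_jobs(job_demands, server_capacity):
    """First-fit decreasing: pack jobs onto as few servers as possible.
    Returns one list of job demands per activated server. Illustrative only."""
    remaining = []   # remaining capacity per active server
    assignment = []  # jobs placed on each active server
    for d in sorted(job_demands, reverse=True):
        for i, free in enumerate(remaining):
            if d <= free:            # job fits on an already-active server
                remaining[i] -= d
                assignment[i].append(d)
                break
        else:                        # no fit: power on a new server
            remaining.append(server_capacity - d)
            assignment.append([d])
    return assignment

print(place_jobs([5, 4, 3, 2, 1], 7))  # -> [[5, 2], [4, 3], [1]]
```

Fewer activated servers translates directly into lower idle power, which is the stated optimization target.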
In today's era of Internet of Things (IoT), where massive amounts of data are produced by IoT and other devices, edge computing has emerged as a prominent paradigm for low-latency data processing. However, applications may have diverse latency requirements: certain latency-sensitive processing operations may need to be performed at the edge, while delay-tolerant operations can be performed on the cloud, without occupying the potentially limited edge computing resources. To achieve that, we envision an environment where computing resources are distributed across edge and cloud offerings. In this paper, we present the design of CLEDGE (CLoud + EDGE), an information-centric hybrid cloud-edge framework, aiming to maximize the on-time completion of computational tasks offloaded by applications with diverse latency requirements. The design of CLEDGE is motivated by the networking challenges that mixed reality researchers face. Our evaluation demonstrates that CLEDGE can complete on-time more than 90% of offloaded tasks with modest overheads.
Keywords: Edge device · Citations: 0
Mobile edge computing is widely believed to be a key technology in 5G networks. It promises dramatic reductions in latency and mobile energy consumption by offloading computation-intensive and latency-critical jobs to edge servers for execution. In this paper, we consider the job dispatching problem at the access proxies in a heterogeneous mobile edge computing system. The problem is modeled as a multi-armed bandit problem, and an online job dispatching algorithm based on the UCB1 strategy is proposed. The performance of the proposed dispatcher and the edge servers is theoretically modeled and analyzed. Simulation experiments are conducted to evaluate the effectiveness of the proposed algorithm.
Keywords: Mobile Edge Computing, Computation offloading · Citations: 1
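The bandit-based dispatcher can be sketched with the standard UCB1 index, which balances exploiting servers with good observed rewards against exploring rarely used ones. This is a generic illustration rather than the paper's exact dispatcher; for job dispatching, the reward could be, e.g., the negative of observed latency:

```python
import math

def ucb1_pick(mean_reward, pulls, t):
    """Pick the server (arm) maximizing the UCB1 index:
    empirical mean + sqrt(2 ln t / n). Unpulled arms are tried first."""
    best, best_idx = -float("inf"), 0
    for i, (mu, n) in enumerate(zip(mean_reward, pulls)):
        idx = float("inf") if n == 0 else mu + math.sqrt(2 * math.log(t) / n)
        if idx > best:  # strict >: ties keep the earlier arm
            best, best_idx = idx, i
    return best_idx

print(ucb1_pick([0.5, 0.9], [10, 10], t=20))  # -> 1 (better mean wins)
print(ucb1_pick([0.9, 0.5], [0, 10], t=20))   # -> 0 (unpulled arm explored)
```

After each dispatch, the proxy would update that server's pull count and running mean reward before the next decision.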
Driven by great demands for low-latency services from edge devices (EDs), mobile edge computing (MEC) has been proposed to enable computing capacity at the edge of the radio access network. However, conventional MEC servers suffer from disadvantages such as limited computing capacity, preventing computation-intensive tasks from being processed in time. To relieve this issue, we propose heterogeneous MEC (HetMEC), where data that cannot be processed in time at the edge can be offloaded to upper-layer MEC servers, and finally to the cloud center (CC) with its more powerful computing capacity. We design a latency minimization algorithm that jointly coordinates the task assignment, computing, and transmission resources among the EDs, the multi-layer MEC servers, and the CC. Simulation results indicate that our proposed algorithm achieves lower latency and a higher processing rate than the conventional MEC scheme.
Keywords: Mobile Edge Computing, Edge device · Citations: 0
With the rapid upgrading and explosive growth of Internet of Things (IoT) devices in mobile-edge computing, more and more IoT applications with high resource requirements are being developed and deployed. Meanwhile, there are large numbers of edge nodes (e.g., switches and edge servers) with limited resources, high operating costs, and certain failure probabilities in the mobile-edge computing environment. Therefore, when an IoT application is split into multiple collaborative tasks and offloaded to multiple edge clouds, there is an urgent need to increase the availability of the task allocation scheme and the resource utilization of the edge servers while keeping communication delay bounded. In this article, we first present a joint optimization objective that evaluates the unavailability level, communication delay, and resource wastage when allocating a batch of IoT applications to multiple edge clouds. We then propose an approach to minimize this joint objective under a communication delay constraint. Finally, we conduct a comprehensive simulation analysis to demonstrate that our proposed approach is superior to other related approaches.
Keywords: Mobile Edge Computing, Unavailability, Edge device, Resource Management · Citations: 15
Multi-access Edge Computing (MEC) provides cloud computing capabilities at the edge by offloading users' service requests to MEC servers deployed at Base Stations (BSs). Optimising the resource allocation across such distributed units in a physical area such as a city, especially for compute-intensive and latency-critical services, is a key challenge. We propose a swarm-based approach for placing functions at the edge using a serverless architecture, which does not require services to pre-occupy the required computing resources. The approach uses a probabilistic model to decide where to place the functions while considering the resources available at each MEC server and the latency between the physical servers and the application requester. A central controller with a federated view of the available MEC servers orchestrates function deployment and deals with changes in available resources. We compare our approach against the Best-Fit, Max-Fit, MultiOpt, ILP, and Random baselines. Results show that our approach can reduce application latency with limited effect on resource utilisation.
Keywords: Mobile Edge Computing · Citations: 9
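The probabilistic placement idea can be sketched as weighted random server selection. The weighting below (free resources divided by latency) is an assumption for illustration, not the paper's actual model:

```python
import random

def choose_server(free_cpu, latency, rng=random):
    """Pick a MEC server index at random, favoring servers with more free
    resources and lower latency to the requester (the weights are assumed)."""
    weights = [f / l for f, l in zip(free_cpu, latency)]
    return rng.choices(range(len(free_cpu)), weights=weights, k=1)[0]

# A free-but-far server competes with a loaded-but-close one:
# choose_server([8.0, 2.0], [20.0, 5.0]) gives each a weight of 0.4 (a tie).
```

Randomized, weight-proportional choices spread load across acceptable servers instead of piling every function onto the single current best, which is the usual appeal of swarm-style placement.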
In edge computing environments, edge cloud servers are deployed in the network in addition to centralized cloud servers to provide high-quality services to users. The performance of an edge computing environment depends on where the edge cloud servers are located. This paper introduces several placement methods for edge cloud servers based on optimization problems that take into account the network loads and the edge cloud server loads. Through numerical experiments, we compare these placement methods.
Keywords: Cloud server · Citations: 10