The paper examines the problem of fair bandwidth allocation in heterogeneous storage systems within the framework of multi-resource allocation. We first extend the Bottleneck-Aware Allocation model recently proposed by the authors to directly compute the maximum allocation satisfying local fairness, envy-freeness, and sharing incentive. Next, we broaden the solution space to all allocations that satisfy envy-freeness and sharing incentive even if they do not satisfy local fairness. We present an efficient algorithm that maximizes system utilization in this more general model.
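The two fairness properties carried into the broader solution space can be made concrete with a small sketch. This is a hypothetical two-resource, task-based utility setting for illustration only, not the paper's model or algorithm: a client's utility is the number of tasks it can run given its per-task demand vector, envy-freeness means no client prefers another client's bundle, and sharing incentive means every client does at least as well as under an equal split.

```python
def utility(bundle, demand):
    # Tasks a client can run: limited by the scarcest resource it needs.
    return min(b / d for b, d in zip(bundle, demand) if d > 0)

def envy_free(alloc, demands):
    # No client would gain utility by swapping its bundle for another's.
    n = len(alloc)
    return all(utility(alloc[i], demands[i]) >= utility(alloc[j], demands[i]) - 1e-9
               for i in range(n) for j in range(n))

def sharing_incentive(alloc, demands, capacity):
    # Each client does at least as well as an equal 1/n split of every resource.
    n = len(alloc)
    equal = [c / n for c in capacity]
    return all(utility(alloc[i], demands[i]) >= utility(equal, demands[i]) - 1e-9
               for i in range(n))
```

For example, with capacities (10, 10) and per-task demands (1, 2) and (2, 1), the allocation giving each client 10/3 tasks satisfies both checks, while handing most of both resources to one client fails envy-freeness.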
Global warming demands immediate attention owing to the excessive accumulation of CO2 in the atmosphere. Because the hydrate formation process can trap CO2 molecules in a lattice of water molecules, storing CO2 in the ocean as hydrates is a promising strategy. In this paper, molecular dynamics (MD) simulations are used to investigate the behavior of CO2 hydrate growth in salt-containing electrolyte solutions, and the mechanism by which thermodynamic conditions and salt ions affect hydrate growth is explored. The simulation results show that an appropriate degree of subcooling can promote the hydrate growth process, yet the fluctuation of the hydrate growth rate remains significant in the presence of salt ions. Higher temperatures and salt ions both inhibit the formation of hydrogen bonds between water molecules, reducing the likelihood of cage structure formation. Notably, saline environments enhance the competition for water molecules between hydrate cages and salt ions, increasing the proportion of empty cages. The selectivity of CO2 molecules for different cages is influenced by both temperature and salt ions; higher temperatures and the presence of salt ions make it more difficult for CO2 molecules to enter the smaller 5¹² cages. The results of the study provide insights into the mechanism of CO2 hydrate formation in seawater at the microscopic level, guiding the future realization of CO2 sequestration technology via hydrate formation in marine environments.
In this era of the Internet of Everything (IoE), edge computing has emerged as the critical enabling technology for solving a series of issues caused by the increasing number of interconnected devices and large-scale data transmission. However, the deficiencies of the edge computing paradigm are gradually being magnified in the context of IoE, especially in terms of service migration, security and privacy preservation, and edge node deployment. These issues cannot be well addressed by conventional approaches. Thanks to the rapid development of emerging technologies, such as artificial intelligence (AI), blockchain, and microservices, novel and more effective solutions have emerged and been applied to existing challenges. In addition, edge computing can be deeply integrated with technologies in other domains (e.g., AI, blockchain, 6G, and digital twin) through interdisciplinary intersection and practice, releasing the potential for mutual benefit. These promising integrations need to be further explored and researched. Moreover, edge computing provides strong support in application scenarios such as remote working, new physical retail industries, and digital advertising, which has greatly changed the way we live, work, and study. In this article, we present an up-to-date survey of edge computing research. In addition to introducing the definition, model, and characteristics of edge computing, we discuss a set of key issues in edge computing and novel solutions supported by emerging technologies in the IoE era. Furthermore, we explore potential and promising trends from the perspective of technology integration. Finally, new application scenarios and the final form of edge computing are discussed.
Attributed graph anomaly detection aims to identify nodes that deviate significantly from the majority of normal nodes, and has received increasing attention due to the ubiquity and complexity of graph-structured data in various real-world scenarios. However, current mainstream anomaly detection methods are primarily designed for centralized settings, which may pose privacy leakage risks in certain sensitive situations. Although federated graph learning offers a promising solution by enabling collaborative model training in distributed systems while preserving data privacy, a practical challenge arises because each client typically possesses only a limited amount of graph data. Consequently, naively applying federated graph learning to anomaly detection tasks in distributed environments may lead to suboptimal performance. To address these challenges, we propose FedCAD, a federated graph anomaly detection framework based on contrastive self-supervised learning (CSSL). FedCAD exchanges anomaly node information between clients via federated learning (FL) interactions. First, FedCAD uses pseudo-label discovery to preliminarily identify anomalous nodes on each client. Second, FedCAD employs a local anomaly neighbor embedding aggregation strategy, which enables the current client to aggregate the neighbor embeddings of anomalous nodes from other clients, thereby amplifying the distinction between anomalous nodes and their neighbors. Doing so sharpens the contrast between positive and negative instance pairs within contrastive learning, thus enhancing the efficacy and precision of anomaly detection under this learning paradigm. Finally, the effectiveness of FedCAD is demonstrated by experimental results on four real graph datasets.
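The contrastive intuition behind CSSL-based detection can be illustrated with a minimal, self-contained sketch. This is a generic toy example, not FedCAD itself: a node forms a positive pair with its own neighborhood context and a negative pair with a corrupted (here, global-mean) context, and a node is flagged as anomalous when it agrees with the corrupted context better than with its own neighborhood.

```python
import numpy as np

def contrastive_anomaly_scores(emb, adj):
    """Toy contrastive score: higher means the node matches a generic
    (corrupted) context better than its own neighborhood context."""
    deg = adj.sum(axis=1, keepdims=True)
    ctx = adj @ emb / np.maximum(deg, 1)   # positive context: neighbor mean
    neg = emb.mean(axis=0)                 # negative context: global mean
    def cos(a, b):
        return (a * b).sum(-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1) + 1e-12)
    return cos(emb, neg) - cos(emb, ctx)
```

On a toy clique of five nodes where one node's embedding points in a different direction from the rest, that node receives the highest score, since its embedding disagrees with its neighbors' mean while still partially matching the global context.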
Mobile network operators (MNOs) allocate computing and caching resources for mobile users by deploying a central control system. Existing studies mainly use programming and heuristic methods to solve the resource allocation problem, while ignoring energy cost, which is highly significant to the MNO. To solve this problem, in this article, we design a joint computing and caching framework by integrating the deep deterministic policy gradient (DDPG) algorithm. In particular, we focus on the Internet of Vehicles scenario, which requires the support of the mobile network provided by the MNO. We first formulate an optimization problem to minimize the MNO's energy cost by jointly considering the computation and caching energy costs. Then, we recast the formulated problem as a reinforcement learning problem and use DDPG to solve it. Simulation results show that our solution reduces energy costs by more than 15% while ensuring that tasks are completed on time.
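The core DDPG mechanics used here (a critic that estimates Q(s, a) and a deterministic actor updated along the critic's action gradient) can be sketched on a toy one-step problem. Everything in this sketch is hypothetical and unrelated to the paper's energy-cost formulation; it only shows the actor-critic update pattern with linear function approximators instead of neural networks.

```python
import numpy as np

# Toy single-step problem (illustrative only): state s, action a,
# reward r = -(a - s)^2, so the optimal deterministic policy is a = s.
rng = np.random.default_rng(0)
theta = 0.0                    # actor parameter: policy a = theta * s
w = np.zeros(4)                # critic weights: Q(s,a) = w @ [a*a, a*s, s*s, 1]
lr_actor, lr_critic = 0.05, 0.02

for _ in range(5000):
    s = rng.uniform(0.5, 1.0)
    a = theta * s + rng.normal(0.0, 0.2)   # exploration noise on the policy
    r = -(a - s) ** 2
    feats = np.array([a * a, a * s, s * s, 1.0])
    # Critic step: episodes are one step long, so the TD target is just r.
    w += lr_critic * (r - w @ feats) * feats
    # Actor step: deterministic policy gradient, dQ/da * da/dtheta.
    a_pi = theta * s
    dq_da = 2.0 * w[0] * a_pi + w[1] * s
    theta += lr_actor * dq_da * s

# theta approaches 1.0, i.e. the actor learns the optimal policy a = s.
```

In full DDPG the linear models are replaced by neural networks, with target networks and a replay buffer stabilizing the same two coupled updates.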