From a systematic perspective, the Industrial Internet involves five kinds of industrial models: the object information model, mechanism model, data-driven model, mixed model, and simulation model. Firstly, the contents of these five kinds of industrial models and the relationships between them are introduced. Secondly, the existing modeling methods in academia and industry are summarized from the perspective of each of the five model types. Finally, the development trends of modeling technology in the field of the Industrial Internet are summarized and future research directions are discussed.
When natural language processing (NLP) technology is applied to the Industrial Internet, problems such as data scarcity and data imbalance are often encountered. To improve the accuracy and robustness of models, text data augmentation has been proposed to expand the available data. Data augmentation is widely used in computer vision; for example, the semantics of an image are not changed if the image is rotated by a few degrees or converted to grayscale. Augmentation of text data in NLP, however, is still relatively rare. Data augmentation is a low-cost means of expanding the amount of data and improving model performance, and it has a wide range of applications.
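As a concrete illustration of the idea (not the paper's specific method), below is a minimal EDA-style text augmentation sketch in Python; the two operators (random swap and random deletion) and the sample sentence are assumptions chosen for simplicity.

```python
import random

def random_swap(tokens, n_swaps=1):
    """Randomly swap the positions of two tokens, n_swaps times."""
    tokens = tokens[:]
    for _ in range(n_swaps):
        if len(tokens) < 2:
            break
        i, j = random.sample(range(len(tokens)), 2)
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p=0.1):
    """Delete each token with probability p, keeping at least one token."""
    kept = [t for t in tokens if random.random() > p]
    return kept if kept else [random.choice(tokens)]

def augment(sentence, n_copies=4):
    """Generate n_copies augmented variants of a sentence."""
    tokens = sentence.split()
    variants = []
    for _ in range(n_copies):
        op = random.choice([random_swap, random_deletion])
        variants.append(" ".join(op(tokens)))
    return variants

print(augment("the spindle motor temperature exceeded the alarm threshold"))
```

Each augmented variant keeps roughly the same label and semantics while adding lexical and positional noise, which is the property text data augmentation relies on.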
Temperature monitoring and prediction play a crucial role in energy consumption control within data centers (DC). However, installing numerous temperature sensors in a large DC can incur significant costs and maintenance challenges. Moreover, due to environmental constraints and measurement technology limitations, missing or distorted data may arise in the measurement of critical parameters. To address this issue, we utilize an implicit neural representation to reconstruct the temperature distribution, thereby completing the missing and distorted temperature data. In this paper, we parameterize the complex temperature distribution as a continuous function, which implicitly represents the temperature within the DC as a mapping from the spatial coordinates and active power of the cabinets in the DC to the corresponding temperature. Our proposed method demonstrates both quantitative and qualitative effectiveness in accurately completing the temperature information and rapidly reconstructing the temperature distribution. This approach offers a novel solution for addressing missing data and data distortion issues in DC temperature measurement.
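As a rough sketch of what such an implicit neural representation could look like (an illustrative assumption, not the paper's exact architecture), a coordinate-based MLP in PyTorch can map (x, y, z, cabinet active power) to temperature and be fitted to sparse sensor readings:

```python
import torch
import torch.nn as nn

class TemperatureINR(nn.Module):
    """Coordinate-based MLP: (x, y, z, cabinet active power) -> temperature."""
    def __init__(self, hidden=128, layers=4):
        super().__init__()
        dims = [4] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU())
        self.net = nn.Sequential(*blocks)

    def forward(self, coords_power):
        return self.net(coords_power)

# Fit the continuous field to sparse sensor readings (hypothetical tensors).
model = TemperatureINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inputs = torch.rand(256, 4)    # [x, y, z, power], normalized
targets = torch.rand(256, 1)   # measured temperatures, normalized
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    opt.step()
```

Once fitted, such a model can be queried at arbitrary coordinates, which is what allows missing or distorted readings to be filled in from the continuous field.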
The development of 5G, cloud computing, artificial intelligence (AI) and other new-generation information technologies has promoted the rapid development of the data center (DC) industry, which in turn has led to severe energy consumption and carbon emission problems. In addition to traditional engineering-based methods, AI-based technology has been widely used in existing data centers. However, existing AI model training schemes are time-consuming and labor-intensive. To tackle these issues, we propose an automated training and deployment platform for AI models based on a cloud-edge architecture, covering data processing, data annotation, model training optimization, and model publishing. The proposed system can generate models specific to each room environment and standardize and automate model training, which is helpful in large-scale data center scenarios. The simulation and experimental results show that the proposed solution can reduce the time required for training a single model by 76.2%, and that multiple training tasks can run concurrently. It can therefore adapt to large-scale energy-saving scenarios and greatly improve model iteration efficiency, which raises the energy-saving rate and supports green energy conservation in data centers.
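To make the pipeline shape concrete (a hypothetical sketch, not the platform's actual implementation), the stages listed above can be chained per room and several rooms trained concurrently, for example with a thread pool; the stage outputs here are placeholder strings:

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(room_id):
    """Hypothetical per-room pipeline: process -> annotate -> train -> publish."""
    data = f"processed:{room_id}"
    dataset = f"annotated:{data}"
    model = f"model:{dataset}"
    return f"published:{model}"

rooms = ["room-A", "room-B", "room-C"]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_pipeline, rooms))
print(results)
```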
With AI technology applied to IDC energy saving, whole-room snapshot data are used in some scenarios; the room snapshot data are composed of the snapshot data of the various devices in the room. Owing to settings, performance and other constraints, these devices can only collect data periodically at a certain frequency and cannot achieve time synchronization, so feeding values collected at different times into an AI algorithm causes inaccuracy. Therefore, in the data processing and modeling phase, it is necessary to predict the snapshot value of each device at a given time. In practice, AI needs a large amount of training data to achieve good training results, and China Telecom has a large number of computer rooms that provide such data. Generally, interpolation algorithms are used to unify data from multiple data sources, but data processing is only one step in the AI computing process, and traditional stand-alone interpolation algorithms cannot meet the time requirements. This paper proposes a Spark-based distributed interpolation algorithm. Experiments show that the algorithm can reduce its running time proportionally by adding resources.
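A minimal sketch of the general approach, assuming PySpark RDDs, per-device linear interpolation onto a shared time grid, and hypothetical device names and values (the paper's actual algorithm and data layout may differ):

```python
import bisect
from pyspark.sql import SparkSession

def interpolate_device(records, grid):
    """Linearly interpolate one device's (timestamp, value) series onto grid."""
    records = sorted(records)
    times = [t for t, _ in records]
    vals = [v for _, v in records]
    out = []
    for t in grid:
        i = bisect.bisect_left(times, t)
        if i == 0:
            out.append((t, vals[0]))
        elif i >= len(times):
            out.append((t, vals[-1]))
        else:
            t0, t1 = times[i - 1], times[i]
            v0, v1 = vals[i - 1], vals[i]
            out.append((t, v0 + (v1 - v0) * (t - t0) / (t1 - t0)))
    return out

spark = SparkSession.builder.appName("snapshot-interpolation").getOrCreate()
# Hypothetical input: (device_id, timestamp_seconds, value) triples.
raw = spark.sparkContext.parallelize([
    ("dev1", 0, 20.0), ("dev1", 60, 22.0),
    ("dev2", 10, 5.0), ("dev2", 70, 7.0),
])
grid = [0, 30, 60]  # common timestamps for the unified room snapshot
aligned = (raw.map(lambda r: (r[0], (r[1], r[2])))
              .groupByKey()
              .mapValues(lambda recs: interpolate_device(list(recs), grid)))
print(aligned.collect())
spark.stop()
```

Because devices are independent keys, the interpolation parallelizes naturally across executors, which is what lets the running time shrink as resources are added.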
This paper investigates a novel fuzzy Petri nets (FPNs) method based on q-rung orthopair fuzzy sets (q-ROFSs) to provide an efficient solution for uncertain knowledge representation and reasoning. It not only improves the flexibility of FPNs in knowledge parameter representation and reasoning algorithms but also addresses the challenging problem that most FPNs cannot implement backward reasoning, a common reasoning task that infers condition statuses in reverse from consequences. Specifically, we first propose the q-rung orthopair FPNs (q-ROFPNs) by integrating q-ROFSs with FPNs, which achieves an intuitive evaluation of hesitancy information and a flexible adjustment of the knowledge representation ranges. A reasoning algorithm based on the ordered weighted averaging-weighted average (OWAWA) operator is then developed to accomplish the forward reasoning driven by q-ROFPNs, which can flexibly balance the proposition weights and their position weights. Building upon q-ROFPNs, we further propose the q-rung orthopair fuzzy reversed Petri nets (q-ROFRPNs) for the backward reasoning task, in which a decomposition algorithm for q-ROFRPNs is designed to reduce the inference complexity, and an ordered weighted backward reasoning (OWBR) algorithm is provided to perform backward reasoning suitable for different fuzzy environments. In addition, to ensure the accuracy and rationality of the reasoning results, we propose a knowledge acquisition method based on the power average (PA) operator to eliminate the negative impact of outliers on knowledge parameter assessments. A simulation experiment on the fault diagnosis of an air conditioning system demonstrates that the proposed method achieves more flexible and reliable knowledge representation and reasoning than state-of-the-art FPNs methods.
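For context, the standard q-ROFS representation underlying q-ROFPNs (following Yager's original definition; the paper's notation may differ) assigns each element a membership degree and a non-membership degree whose q-th powers sum to at most one, with the remainder capturing hesitancy:

```latex
\[
A = \{\langle x,\ \mu_A(x),\ \nu_A(x)\rangle \mid x \in X\},\qquad
\mu_A(x)^q + \nu_A(x)^q \le 1,\quad q \ge 1,
\]
\[
\pi_A(x) = \bigl(1 - \mu_A(x)^q - \nu_A(x)^q\bigr)^{1/q}.
\]
```

Larger q enlarges the admissible region of membership/non-membership pairs, which is what gives q-ROFPNs their adjustable knowledge representation range.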
Relational reasoning is the ability to reason about entities and their interactions, a capability that many deep neural networks lack. Recurrent relational networks, introduced by Palm et al. (2017), increase the complexity of the reasoning tasks that can be addressed [1]. In this paper, we introduce Stacked Attention Recurrent Relational Networks (SARRN) to answer natural language questions from facts, a task that fundamentally hinges on multiple steps of relational reasoning, and to improve reasoning ability. Our model is a stacked attention model that uses recurrent attention to focus on fine-grained parts of the documents. We apply our model to the bAbI tasks, a set of proxy tasks that evaluate reading comprehension via question answering. Our model solves 19 of the 20 tasks, and the experimental results on the test sets show that it yields a substantial improvement. We also provide a qualitative analysis to illustrate the results intuitively.
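A minimal sketch of multi-hop (stacked) attention over fact embeddings, written in PyTorch under assumed dimensions; the layer structure and tensors are illustrative only, not SARRN's actual architecture:

```python
import torch
import torch.nn as nn

class AttentionHop(nn.Module):
    """One attention hop: re-weight fact embeddings by similarity to the query."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, query, facts):
        # facts: (num_facts, dim), query: (dim,)
        scores = facts @ self.proj(query)   # (num_facts,)
        weights = torch.softmax(scores, dim=0)
        context = weights @ facts           # (dim,)
        return query + context              # updated query for the next hop

dim, num_facts, hops = 64, 10, 3
facts = torch.randn(num_facts, dim)   # encoded supporting facts (assumed)
query = torch.randn(dim)              # encoded question (assumed)
layers = nn.ModuleList(AttentionHop(dim) for _ in range(hops))
for hop in layers:
    query = hop(query, facts)
print(query.shape)  # torch.Size([64])
```

Stacking several hops lets the query accumulate evidence from different facts in sequence, which is the multi-step behaviour that question answering over facts requires.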