Data-driven diagnostic frameworks for large-scale power grids typically deal with a large number of features collected by sparsely deployed measuring devices. As a pre-processing step, dimensionality reduction methods can improve the efficiency of data-driven diagnostics by extracting sets of informative and relevant features from the raw data through appropriate transformations. This work studies the applicability of various well-known dimensionality reduction techniques, in combination with four classification models, for diagnosing open-circuit faults in smart grids. Through a comparative study, it aims to find the best combination of dimensionality reduction technique and classification model for diagnosing faults under normal, low signal-to-noise-ratio, low sampling rate, and high fault-resistance conditions. Various fault scenarios have been simulated on the IEEE 39-bus system, and a rigorous analysis of the attained results is carried out to determine the best combinations under different conditions.
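As a loose illustration of this comparative protocol, the sketch below pairs a few common dimensionality reduction techniques with several classifiers and scores each pair by cross-validation. The specific reducers, classifiers, and the synthetic feature matrix are illustrative assumptions; the paper's four classification models and its IEEE 39-bus measurements are not reproduced here.

```python
# A minimal sketch, assuming scikit-learn: compare reducer + classifier pairs by CV score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the features extracted from the simulated fault scenarios
X, y = make_classification(n_samples=600, n_features=60, n_informative=12,
                           n_classes=4, random_state=0)

reducers = {"PCA": PCA(n_components=10),
            "kPCA": KernelPCA(n_components=10, kernel="rbf"),
            "LDA": LinearDiscriminantAnalysis(n_components=3)}
classifiers = {"SVM": SVC(), "RF": RandomForestClassifier(),
               "kNN": KNeighborsClassifier(), "DT": DecisionTreeClassifier()}

for r_name, reducer in reducers.items():
    for c_name, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(), reducer, clf)
        score = cross_val_score(pipe, X, y, cv=5).mean()
        print(f"{r_name:5s} + {c_name:3s}: {score:.3f}")
```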
This article focuses on the design of a hierarchical framework for locating faults in smart grids using only the modal components of three-phase voltage measurements. The search space for identifying the faulty lines is first limited to the regions impacted by the fault, which are determined through an improved graph-analytic algorithm that leverages the system topology and attribute affinities. The faulty lines within the faulty regions are then identified by employing a heuristic index extracted from the wavelet multiresolution analysis of the corresponding modal components. The fault location on the faulty lines is finally estimated by the regression analysis of two novel graph regularization-based learning models. The proposed fault location framework has been evaluated over numerous simulated scenarios on the IEEE 39-bus system, with measurements subject to limited sampling rates, high fault resistances, and noise. The attained results validate the efficiency of the proposed framework.
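As a rough illustration of the line-screening step, the sketch below computes a wavelet detail-coefficient energy index per candidate line and flags the line with the largest index. The index definition, wavelet family, and signals are assumptions made for illustration, not the paper's exact formulation.

```python
# A minimal sketch, assuming PyWavelets: a detail-energy index over modal voltage windows.
import numpy as np
import pywt

rng = np.random.default_rng(0)
fs = 5000
t = np.arange(0, 0.2, 1 / fs)

def detail_energy(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return sum(float(np.sum(c ** 2)) for c in coeffs[1:])   # skip the approximation band

# Hypothetical aerial-mode voltages for three candidate lines inside the faulty region
lines = {}
for name in ("L12", "L13", "L23"):
    lines[name] = np.sin(2 * np.pi * 60 * t) + 0.02 * rng.standard_normal(t.size)
lines["L13"] += 0.4 * np.exp(-80 * t) * np.sin(2 * np.pi * 800 * t)  # injected fault transient

index = {name: detail_energy(v) for name, v in lines.items()}
print("flagged line:", max(index, key=index.get), index)
```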
In the face of escalating global climate concerns, the imperative to mitigate CO2 emissions has never been more pressing. A pivotal question that arises pertains to the responsible disposal of captured CO2. Deep saline aquifers have emerged as a promising solution, owing to their high permeability and porosity, which enable efficient CO2 injection and long-term storage. Nevertheless, successful CO2 reservoir injection presents multifaceted challenges, notably the need for an impermeable cap rock to prevent leakage while preserving reservoir permeability for ease of injection. This study delves into data-driven decision-making, where the oil and gas industry is progressively harnessing the capabilities of Machine Learning (ML) and Deep Learning (DL) technologies. Specifically, we investigate the application of ML and DL techniques in monitoring CO2 saturation levels within saline aquifers, employing bottomhole pressure as the primary predictive parameter. A range of algorithms, including Random Forest (RF), Support Vector Regressor (SVR), Recurrent Neural Networks (RNN), and Long Short-Term Memory (LSTM), were rigorously tested to ascertain their efficacy. The training data for these models were generated using a well-known reservoir simulator. We present a detailed analysis of how emerging technologies such as ML and DL can be harnessed to accurately track CO2 saturation levels, and the performance evaluation of the employed algorithms provides valuable insights into their proficiency in predicting CO2 saturation. These results offer a nuanced understanding of the potential applications of these technologies in the management of CO2 reservoirs, paving the way for more effective and sustainable carbon capture and storage (CCS) solutions. This research underscores the integration of cutting-edge machine learning and deep learning technologies within the oil and gas sector to tackle the intricate challenges associated with CO2 disposal, and it highlights the pivotal role of data-centric decision-making in the context of CO2 injection and storage. In a world grappling with the urgent climate crisis, the study's novelty lies in its potential to drive forward more efficient and environmentally responsible CO2 management strategies.
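As a loose sketch of this regression setup, the example below predicts CO2 saturation from short windows of bottomhole pressure using two of the tested model families (Random Forest and SVR). The pressure and saturation series are synthetic placeholders, not the reservoir-simulator output used in the study.

```python
# A minimal sketch, assuming scikit-learn: bottomhole pressure windows -> CO2 saturation.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)                                        # injection time (years)
bhp = 250 + 30 * np.log1p(t) + rng.normal(0, 1.5, t.size)          # bottomhole pressure (bar)
sat = 0.7 * (1 - np.exp(-0.4 * t)) + rng.normal(0, 0.01, t.size)   # CO2 saturation (fraction)

# Short pressure windows as features so the models see the recent pressure trend
window = 5
X = np.lib.stride_tricks.sliding_window_view(bhp, window)
y = sat[window - 1:]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in {"RF": RandomForestRegressor(random_state=0), "SVR": SVR(C=10)}.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "R2:", round(r2_score(y_te, pred), 3))
```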
This paper investigates the design of robust unknown input observers for fault detection in Takagi–Sugeno fuzzy systems. To handle uncertainties related to the membership functions and rule base, interval type-2 fuzzy sets are employed as activation functions in this study. The system is assumed to be affected by parameter uncertainties and time-varying delays, which makes the design procedure more challenging. Furthermore, to achieve better fault detection, a multi-objective optimization index is considered so as to obtain a residual signal with the highest possible sensitivity to the fault and the least sensitivity to other signals. This leads to a set of design constraints in the form of linear matrix inequalities. Two case studies are provided to show the validity of the proposed method. In addition, the superiority of interval type-2 fuzzy sets over type-1 sets is investigated in the simulation part.
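For readers unfamiliar with LMI-based design, the sketch below shows only the flavour of posing such conditions as a semidefinite feasibility problem, here a basic Lyapunov-type LMI solved with cvxpy. The system matrix is a hypothetical placeholder, and the paper's multi-objective residual-sensitivity index and delay-dependent conditions are not reproduced.

```python
# A minimal sketch, assuming cvxpy: feasibility of a Lyapunov-type LMI for a toy system.
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 1.0], [0.0, -3.0]])   # hypothetical stable local linear model
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
constraints = [Q == A.T @ P + P @ A,        # Lyapunov expression
               P >> 1e-6 * np.eye(n),       # P positive definite
               Q << -1e-6 * np.eye(n)]      # decrease condition
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```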
This paper presents a novel diagnostic framework for distributed power systems that is based on generative adversarial networks for generating artificial knockoffs in the power grid. The proposed framework makes use of raw measurements, including voltage, frequency, and phase angle, collected from each bus of the cyber-physical power system. The collected measurements are first fed into a feature selection module, where multiple state-of-the-art techniques are used to extract the most informative features from the initial set of available features. The selected features are then input to a knockoff generation module, where generative adversarial networks are employed to generate the corresponding knockoffs of the selected features. The generated knockoffs are finally fed into a classification module, in which two different classification models are used for fault diagnosis. Multiple experiments have been designed to investigate the effect of noise, fault resistance value, and sampling rate on the performance of the proposed framework. The effectiveness of the proposed framework is validated through a comprehensive study on the IEEE 118-bus system.
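A minimal sketch of the first two modules is given below, assuming mutual-information feature selection and a small fully connected GAN in PyTorch whose generator maps noise to knockoff-like samples, a simplification of conditional knockoff generation. The data, network sizes, and training loop are illustrative choices rather than the paper's architecture.

```python
# A minimal sketch, assuming PyTorch and scikit-learn: feature selection + GAN knockoffs.
import torch
import torch.nn as nn
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Feature-selection module: keep the most informative measurements
X, y = make_classification(n_samples=1000, n_features=40, n_informative=10, random_state=0)
X_sel = SelectKBest(mutual_info_classif, k=10).fit_transform(X, y)
real = torch.tensor(X_sel, dtype=torch.float32)
n_feat = real.shape[1]

gen = nn.Sequential(nn.Linear(n_feat, 32), nn.ReLU(), nn.Linear(32, n_feat))
disc = nn.Sequential(nn.Linear(n_feat, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for _ in range(200):
    noise = torch.randn_like(real)
    fake = gen(noise)
    # Discriminator step: real selected features vs generated knockoffs
    d_loss = (bce(disc(real), torch.ones(len(real), 1))
              + bce(disc(fake.detach()), torch.zeros(len(real), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: try to fool the discriminator
    g_loss = bce(disc(fake), torch.ones(len(real), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

knockoffs = gen(torch.randn_like(real)).detach()
augmented = torch.cat([real, knockoffs], dim=1)   # input to the classification module
print(augmented.shape)
```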
In this paper, an optimal interval type-2 fuzzy controller is designed for the speed control of DC motors. First, the importance and role of type-2 fuzzy systems are outlined. Some properties of type-2 operators, as well as of the membership grades of type-2 fuzzy sets, are then investigated. A comparison between the different parts of type-1 and type-2 fuzzy systems, such as the fuzzifier, fuzzy inference engine, rule base, and defuzzifier, is given. Finally, an interval type-2 fuzzy logic controller is implemented for the speed control of series and shunt DC motors. The motor is considered under both load-disturbance and disturbance-free conditions, and the results obtained for the different conditions are compared in tables and figures. The results show that in the disturbance-free case both controllers have acceptable performance; however, when the system is affected by disturbances, the interval type-2 controller performs better.
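To give a feel for how an interval type-2 rule base yields a crisp control action, the sketch below uses a single normalized speed-error input, triangular sets with a footprint of uncertainty, and the Nie-Tan closed-form type reduction as a simplification of full Karnik-Mendel type reduction. The membership parameters and rule consequents are illustrative, not the tuned values from the paper.

```python
# A minimal sketch of an interval type-2 fuzzy speed controller (Nie-Tan type reduction).
import numpy as np

def it2_triangular(x, a, b, c, fou=0.1):
    """Upper and lower membership of a triangular set with a simple footprint of uncertainty."""
    upper = max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)
    lower = max(upper - fou, 0.0)
    return lower, upper

# Three linguistic labels on the normalized speed-error axis: negative, zero, positive
labels = {"N": (-1.5, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.5)}
# Rule consequents: crisp armature-voltage adjustments (hypothetical values)
consequents = {"N": -1.0, "Z": 0.0, "P": 1.0}

def it2_controller(error):
    num, den = 0.0, 0.0
    for name, (a, b, c) in labels.items():
        lo, up = it2_triangular(error, a, b, c)
        firing = 0.5 * (lo + up)           # Nie-Tan: midpoint of the firing interval
        num += firing * consequents[name]
        den += firing
    return num / den if den > 1e-12 else 0.0

print(it2_controller(0.3))   # small positive correction for a positive speed error
```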
Trust-building is of paramount importance for managing and improving consensus in group decision-making (GDM). This mechanism usually involves a trust propagation process for estimating the level of trust among decision-makers (DMs), a process that is computationally expensive and slows consensus reaching. To address this issue, this work proposes a novel trust-building mechanism that quantifies DMs' levels of trust without relying on trust propagation and instead uses Blockchain technology to facilitate communication between the moderator and the group of DMs. Avoiding trust propagation makes the mechanism computationally efficient, while the Blockchain layer provides a secure and efficient communication protocol that accelerates the consensus-reaching process. The proposed GDM model is illustrated through an example, and the sensitivity of the model to various assumptions is analyzed, demonstrating the practical applicability of this approach.
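As a loose illustration of a propagation-free trust record, the toy sketch below keeps trust statements between DMs on a hash-chained ledger and scores each DM by averaging the statements recorded about them. The block structure and scoring rule are assumptions for illustration, not the paper's protocol.

```python
# A minimal sketch: a hash-chained ledger of trust statements and a propagation-free score.
import hashlib, json, time
from collections import defaultdict

class Ledger:
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "data": "genesis", "ts": time.time()}]

    def add(self, data):
        prev_hash = hashlib.sha256(json.dumps(self.chain[-1], sort_keys=True).encode()).hexdigest()
        self.chain.append({"index": len(self.chain), "prev": prev_hash, "data": data, "ts": time.time()})

ledger = Ledger()
ledger.add({"from": "DM1", "to": "DM2", "trust": 0.8})
ledger.add({"from": "DM3", "to": "DM2", "trust": 0.6})

# Trust level of each DM: mean of the trust values recorded directly about them
scores = defaultdict(list)
for block in ledger.chain[1:]:
    scores[block["data"]["to"]].append(block["data"]["trust"])
print({dm: sum(v) / len(v) for dm, v in scores.items()})   # average trust for DM2 is about 0.7
```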
A precise understanding of the spatial distribution of rock mass properties is essential for the safe and economical design of rock structures. This paper adapts geostatistical methodologies, traditionally employed for estimating block ore grades and tonnage, to forecast rock properties crucial for structural modeling. The Rock Mass Rating (RMR) classification system, extensively utilized for evaluating rock mass quality, serves as a framework to inform excavation techniques and ensure slope stability in open-pit mining and rock support systems for tunnel construction. The study introduces a geostatistical simulation method to create three-dimensional (3D) models of rock mass quality distribution based on RMR. Geotechnical data from 37 drillholes, encompassing a total of 11,278 meters, were collected from the Miduk open pit mine in Iran. Two block models for RMR were constructed using the turning bands simulation method (TBM) with 100 realizations. The research utilized both direct and indirect approaches. In the direct method, the RMR value was considered a singular variable for simulation, whereas the indirect method involved simulating individual RMR parameters and subsequently summing them to derive the final RMR for each block. Cross-validation indicated strong consistency between the two approaches, reinforced by the 3D model of the faults and the contribution of joints, which were derived from scan-line mapping data collected from 24,160 surface stations. Although both methods yielded similar results, the block model developed via the indirect approach proved to be more comprehensive regarding geomechanical parameters and has thus been established as the final model.
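As a rough sketch of the contrast between the direct and indirect approaches, the example below simulates RMR along a hypothetical one-dimensional drillhole transect, once as a single variable and once as the sum of individually simulated parameters. A Cholesky-based Gaussian simulation with an exponential covariance stands in for the turning bands method, and all ranges, sills, and parameter means are illustrative assumptions.

```python
# A minimal sketch: direct vs indirect RMR simulation along a 1-D transect (100 realizations).
import numpy as np

rng = np.random.default_rng(0)
z = np.arange(0.0, 200.0, 2.0)                         # depths along the drillhole (m)

def simulate(mean, sill, corr_len, n_real=100):
    cov = sill * np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(z)))
    return mean + (L @ rng.standard_normal((len(z), n_real))).T   # shape (n_real, n_points)

# Direct approach: RMR simulated as a single regionalized variable
rmr_direct = simulate(mean=55.0, sill=60.0, corr_len=30.0)

# Indirect approach: simulate each RMR parameter separately and sum per block
params = {"UCS": (9, 4, 25), "RQD": (14, 9, 35), "spacing": (11, 4, 30),
          "condition": (16, 9, 25), "water": (8, 2, 40)}
rmr_indirect = sum(simulate(m, s, r) for m, s, r in params.values())

print("direct   mean/std:", round(float(rmr_direct.mean()), 1), round(float(rmr_direct.std()), 1))
print("indirect mean/std:", round(float(rmr_indirect.mean()), 1), round(float(rmr_indirect.std()), 1))
```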
Computational intelligence-based diagnostic frameworks have emerged as rapidly evolving yet highly efficient approaches for diagnosing faults in power grids. This work builds a diagnostic framework on computational intelligence techniques to improve decision-making and diagnostic accuracy. The framework has three modules for signal processing, fault detection, and fault location. The signal-processing module uses the variational mode decomposition technique to extract informative time-frequency features from the voltage and frequency signals. The voltage features are then fed into the fault detection module to train a set of modular support vector machines that monitor the binary state of each node in the power grid. Once a faulty state is detected on a node, the third module is activated to identify the fault location. This module benefits from a novel zSlices-based general type-2 fuzzy fusion model for identifying the fault type as well as mitigating the false alarm rate. The exact location of the fault is then determined through a fuzzy decision support system equipped with a recommendation mechanism for consensus reaching. Various scenarios are simulated on the IEEE 39-bus system and on an experimental setup of a three-bus, two-line transmission system, and the attained results verify the applicability, efficiency, and robustness of the proposed framework.
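A minimal sketch of the per-node (modular) detection stage is given below: time-frequency energy features extracted from a bus-voltage window feed a small SVM that flags the binary state of that bus. scipy's STFT is used here only as a stand-in for the variational mode decomposition, and the signals and labels are synthetic.

```python
# A minimal sketch, assuming scipy and scikit-learn: one detection SVM per monitored node.
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
fs, n_samples, n_windows = 1000, 400, 300

def features(sig):
    _, _, Z = stft(sig, fs=fs, nperseg=64)
    band_energy = (np.abs(Z) ** 2).sum(axis=1)        # energy per frequency band
    return band_energy / band_energy.sum()

X, y = [], []
for _ in range(n_windows):
    t = np.arange(n_samples) / fs
    v = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(n_samples)
    faulty = rng.random() < 0.5
    if faulty:                                        # inject a damped high-frequency transient
        v += 0.6 * np.exp(-20 * t) * np.sin(2 * np.pi * 300 * t)
    X.append(features(v)); y.append(int(faulty))

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
node_svm = SVC(kernel="rbf").fit(X_tr, y_tr)          # one such SVM per monitored node
print("per-node detection accuracy:", node_svm.score(X_te, y_te))
```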