    A Data Reconciliation Based Approach to Accuracy Enhancement of Operational Data in Power Plants
    Abstract:
    Accuracy of the operational data of a power plant is essential for performance monitoring and fault diagnosis. However, because systematic and random measurement errors inevitably arise in the course of obtaining operational data, these errors can only be reduced to a certain level, never eliminated. In this work, we propose a data reconciliation based approach that reduces the errors in operational data and thus enhances its accuracy. The reconciled data can then be used in performance monitoring and fault diagnosis systems to improve their performance. The proposed method rests on more efficient use of redundant data and a first-principles mathematical model of the power plant: an optimization is performed in which the weighted least-squares sum of the differences between measured data and their estimated values is minimized. To illustrate the capability of the proposed method, we provide a case study of data reconciliation for feedwater heater heat balance analysis in a 660 MW coal-fired power plant in China. Results show that the uncertainty of four key parameters, namely feedwater mass flow rate, condensate mass flow rate, deaerator pressure and outlet temperature, can be reduced by 24%, 30%, 5% and 65% respectively, while the uncertainties of the other parameters are also reduced to varying extents. Moreover, the results indicate that the proposed approach remains effective over a wide range of measured data quality, where the quality of some data may be much worse than that of others and the estimated measurement uncertainties of the operational data may not be accurate.
    Keywords:
    Boiler feedwater
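
As a rough illustration of the weighted least-squares reconciliation the abstract describes, the sketch below adjusts redundant flow measurements so that they satisfy a first-principles balance constraint. The two-outlet splitter, the numbers, and the uncertainties are all hypothetical, not the paper's feedwater-heater model.

```python
# Minimal sketch of weighted least-squares data reconciliation, in the spirit
# of the approach the abstract describes. The two-stream mass balance used as
# the constraint is a made-up example, not the paper's feedwater-heater model.
import numpy as np
from scipy.optimize import minimize

measured = np.array([100.2, 49.6, 49.1])  # hypothetical flows: inlet, outlet A, outlet B
sigma    = np.array([0.8, 0.5, 0.5])      # assumed standard uncertainties

def objective(x):
    # Weighted least squares: sum of squared, uncertainty-normalized adjustments.
    return np.sum(((x - measured) / sigma) ** 2)

def mass_balance(x):
    # First-principles constraint: inlet flow equals the sum of outlet flows.
    return x[0] - x[1] - x[2]

res = minimize(objective, x0=measured, method="SLSQP",
               constraints=[{"type": "eq", "fun": mass_balance}])
print("reconciled:", res.x)  # adjusted values now satisfy the balance exactly
```

Measurements with small uncertainties are adjusted less than loosely known ones, which is how redundancy is converted into reduced uncertainty on the reconciled values.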
    With the rapid development of industry, huge amounts of industrial data are generated every day, and part of this data is duplicated. Duplication not only reduces data quality to a certain extent but also hampers an enterprise's ability to make correct decisions, thereby reducing productivity. To improve data quality, it is particularly important to clean similar duplicate records. However, when the SNM (Sorted Neighborhood Method) algorithm is used to detect similar records, all records within the window must be compared, and both time efficiency and accuracy are low. To address this defect, an improved dynamic fault-tolerant algorithm based on effective weights is proposed in this paper. First, records within the window whose lengths differ by more than a set proportion are excluded as non-duplicates, reducing the number of record comparisons and thus improving detection efficiency. Second, by setting an attribute validity factor, a dynamic fault-tolerance mechanism is introduced to handle misjudgments caused by missing attributes during detection, which improves matching efficiency while preserving the accuracy of similar-duplicate-record detection. Finally, experimental results show that, under the same experimental environment, the improved algorithm has obvious advantages in both time efficiency and accuracy.
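
To make the windowed comparison and the length-ratio pre-filter concrete, here is a minimal sketch of SNM-style matching. The similarity measure, window size, and thresholds are illustrative assumptions rather than the paper's settings, and the dynamic fault-tolerance factor is omitted.

```python
# Minimal sketch of SNM-style duplicate detection with a length-ratio
# pre-filter. Similarity function, window size and thresholds are
# illustrative assumptions, not the paper's values.
from difflib import SequenceMatcher

def similar_pairs(records, key, window=5, len_ratio=0.6, threshold=0.85):
    records = sorted(records, key=key)          # SNM: sort on a chosen key
    pairs = []
    for i, a in enumerate(records):
        for b in records[i + 1 : i + window]:   # compare only inside the window
            # Pre-filter: records whose lengths differ too much cannot be
            # duplicates, so skip the costly similarity comparison entirely.
            if min(len(a), len(b)) / max(len(a), len(b)) < len_ratio:
                continue
            if SequenceMatcher(None, a, b).ratio() >= threshold:
                pairs.append((a, b))
    return pairs

rows = ["Smith, John, Beijing", "Smith, Jon, Beijing", "Wang, Li, Shanghai"]
print(similar_pairs(rows, key=lambda r: r))
```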
    Rising power consumption has become one of the primary obstacles to improving processor performance. To manage processor power dynamically, the power must be obtained in a timely and accurate manner, and power estimation using performance events is a better approach in terms of both timeliness and accuracy. We analyzed the correlation between performance events and the power of microprocessors. Based on this correlation analysis, only one performance event was selected for the power estimation of each functional unit, and the power of each functional unit was then estimated using simple regression analysis. In addition, the impact of ambient temperature on estimation accuracy was examined. The experimental results show that (1) the power of each functional unit has a close linear correlation with a single performance event and can be estimated accurately, and (2) ambient temperature has a negative impact on the accuracy of the power estimation.
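
The estimation step reduces to fitting one simple linear regression per functional unit, with the unit's selected performance-event counter as the single predictor. A minimal sketch with made-up counter and power samples:

```python
# Minimal sketch of the per-functional-unit power model the abstract
# describes: one performance-event counter per unit, fitted with simple
# linear regression. The event counts and power readings are illustrative.
import numpy as np

# Hypothetical samples: counter readings for one unit and measured unit power (W).
events = np.array([1.0e6, 2.1e6, 3.2e6, 4.0e6, 5.3e6])
power  = np.array([3.1, 4.4, 5.8, 6.7, 8.2])

# Fit power ~ a * events + b by least squares (simple regression).
a, b = np.polyfit(events, power, deg=1)
estimate = lambda e: a * e + b
print(f"estimated power at 2.5e6 events: {estimate(2.5e6):.2f} W")
```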
    The well-known largest normalized residual (LNR) test for bad data identification becomes computationally inefficient for large-scale power systems containing a large volume of bad data, given the fact that it identifies and removes bad measurements sequentially, one at a time. In this paper, a highly efficient alternative implementation of the LNR test will be presented where the computational efficiency will be significantly improved. The main idea is based on the classification of suspect measurements into groups, which have negligible interaction. Then, the LNR test can be applied simultaneously to each individual group, allowing simultaneous identification of multiple bad data in different groups. Consequently, the number of identification/correction cycles for processing a large volume of bad data will be significantly reduced. Simulations carried out on a large utility system show drastic reductions in the CPU time for bad data processing while maintaining highly accurate results. This work is expected to facilitate implementation and more effective use of the LNR test for identifying and correcting measurement errors in very large power systems.
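
For reference, the classical LNR test that the paper accelerates works as sketched below on a toy linear measurement model; the grouping of weakly interacting suspect measurements, which is the paper's contribution, is omitted here.

```python
# Minimal sketch of the largest normalized residual (LNR) test for a linear
# measurement model z = H x + e. The 4-measurement, 2-state system is a toy
# example, not from the paper.
import numpy as np

H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])              # measurement Jacobian
R = np.diag([0.01, 0.01, 0.01, 0.01])    # measurement error covariance
z = np.array([1.0, 2.0, 3.8, -1.0])      # third measurement biased (true value 3.0)

W = np.linalg.inv(R)
G = H.T @ W @ H                          # gain matrix
x_hat = np.linalg.solve(G, H.T @ W @ z)  # weighted least-squares state estimate
r = z - H @ x_hat                        # measurement residuals
Omega = R - H @ np.linalg.solve(G, H.T)  # residual covariance matrix
rN = np.abs(r) / np.sqrt(np.diag(Omega)) # normalized residuals

worst = int(np.argmax(rN))
print(f"largest normalized residual: r_N[{worst}] = {rN[worst]:.2f}")
if rN[worst] > 3.0:                      # common detection threshold
    print(f"measurement {worst} flagged as bad data; remove and re-estimate")
```

In the sequential version this identify-remove-re-estimate cycle repeats once per bad measurement, which is exactly the cost the group-wise scheme reduces.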
    The performance acceptance test for a gas-steam Combined Cycle Power Plant (CCPP) is of great significance to both the equipment manufacturer and the customer. The influence of measurement error on the calculation of guaranteed performance data, such as power output and heat rate, can lead to unnecessary loss for either party. The commonly used uncertainty analysis method based on ASME PTC 19.1 requires all measuring instrumentation to work within its design accuracy range. Meanwhile, owing to the complexity of the CCPP system, the large number of measured items, and the propagation of measurement and data-reduction errors, the uncertainty of the corrected performance data can be significant. In this paper, a process data reconciliation method based on VDI 2048 is introduced. With access to complete performance test data from a CCPP project, data reconciliation is performed with an appropriate thermodynamic model. Several measurement values with gross errors are identified and verified in the heat balance calculation. Moreover, after recalculating the corrected power output and heat rate with the reconciled data instead of the raw data, a comparison with the common uncertainty analysis method is carried out. It is shown that with this reconciliation method it is not only possible to detect gross errors such as instrumentation drift, but also to dramatically increase the accuracy of the test results, which is of great value to both manufacturer and customer.
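
A minimal sketch of the kind of post-reconciliation consistency check VDI 2048 prescribes: the weighted sum of squared corrections is compared against a chi-square quantile, and a violation points to a gross error. All numbers and the degrees of freedom are illustrative; the standard's full procedure works with the covariance matrix of the corrections.

```python
# Minimal sketch of a VDI 2048-style gross-error check: after reconciliation,
# the weighted sum of squared corrections (the "penalty") is tested against a
# chi-square quantile. Values are illustrative, not from the paper.
import numpy as np
from scipy.stats import chi2

measured   = np.array([602.1, 298.8, 303.9])  # hypothetical raw values
reconciled = np.array([601.0, 299.0, 302.0])  # values returned by reconciliation
sigma      = np.array([0.9, 0.4, 0.5])        # assumed standard uncertainties

v = reconciled - measured                     # corrections applied by reconciliation
penalty = np.sum((v / sigma) ** 2)            # weighted sum of squared corrections
dof = 1                                       # redundancy (no. of constraints), example value

if penalty > chi2.ppf(0.95, dof):             # chi-square criterion at 95% confidence
    print(f"penalty {penalty:.2f}: gross error suspected")
else:
    print(f"penalty {penalty:.2f}: data consistent at the 95% level")
```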
    Control systems are key elements of virtually all industrial processes, and their performance directly impacts aspects as important as product quality and variability, operational safety, process efficiency and costs, and environmental impact. In this paper we address the problem of monitoring the performance of such control systems; in particular, a new historical-data benchmark index (I_M) is proposed that can distinguish perturbations in the system's core modules, which are under the supervision of process owners, from those originating at the level of the disturbances, which usually involve other stakeholders. It generalizes the current index (I_v), since it can be shown to reduce to that index in the particular case where the variability of the disturbances is the same in the reference (benchmark) period and in the monitoring period. The results obtained demonstrate that the proposed historical-data benchmark index maintains the target false alarm rate in situations where the variability of the disturbances increases, a situation where the current index, I_v, fails. When the disturbance variability is unchanged, both indices present similar detection capability, as expected. The subsequent identification of the faulty modules was also analysed, and the results show that the proposed methodology can identify the general source of the degradation in controller performance, namely whether it is due to a perturbation within the system's core (and which loop is affected) or at the level of the disturbances (increasing variability of the loads or a change in their dynamic behaviour).
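
As a rough illustration of a historical-data benchmark index of the I_v type, the sketch below compares the variance of the control error in a monitoring window with that of a reference period. It deliberately omits the disturbance-separation refinement that distinguishes the proposed I_M, and the simple variance ratio shown is only a stand-in for the paper's exact definitions.

```python
# Minimal sketch of a historical-data benchmark index of the I_v type: the
# control-error variance of a reference (benchmark) window is compared with
# that of a monitoring window. The toy ratio below is a stand-in for the
# paper's exact index definitions, which are not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
benchmark_window  = rng.normal(0.0, 1.0, 2000)   # control error, healthy period
monitoring_window = rng.normal(0.0, 1.6, 2000)   # control error, degraded period

index = np.var(benchmark_window) / np.var(monitoring_window)
print(f"index = {index:.2f}")   # values well below 1 suggest degraded performance
```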