    Analysis of BWR OPRM plant data and detection algorithms with DSSPP
    Abstract:
All U.S. BWRs are required to have licensed stability solutions that satisfy General Design Criteria (GDC) 10 and 12 of 10 CFR 50 Appendix A. Implemented solutions are either detect-and-suppress or preventive in nature. Detection and suppression of power oscillations is accomplished by specialized hardware and software such as the Oscillation Power Range Monitor (OPRM) utilized in the Option III and Detect and Suppress Solution - Confirmation Density (DSS-CD) stability Long-Term Solutions (LTSs). The detection algorithms are designed to recognize a Thermal-Hydraulic Instability (THI) event and initiate control rod insertion before the power oscillations grow far enough above the noise level to threaten fuel integrity. Option III is the most widely used long-term stability solution in the US, with more than 200 reactor-years of operational history. DSS-CD represents an evolutionary step from the stability LTS Option III, and its licensed domain envelopes the Maximum Extended Load Line Limit Analysis Plus (MELLLA+) domain. To enhance the capability to investigate the sensitivity of key parameters of stability detection algorithms, GEH has developed a new engineering analysis code, DSSPP (Detect and Suppress Solution Post Processor), which is introduced in this paper. The DSSPP analysis tool represents a major advancement in the method for diagnosing the design of stability detection algorithms: it enables designers to perform parametric studies of the key parameters relevant to THI events and to fine-tune these system parameters so that a potential spurious scram might be avoided. Demonstrations of DSSPP's application utilizing actual plant THI data are also presented in this paper. A BWR/6 plant experienced a transient that included an unplanned recirculation pump transfer from fast to slow speed, resulting in a power decrease from about 100% to ~40% of rated and a core flow decrease from about 99% to ~30% of rated. As the feedwater temperature was reduced to equilibrium conditions, the power increased from ~40% to ~60% with little change in flow. A THI event developed and, subsequently, an OPRM-initiated scram occurred with Option III. (authors)
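To make the detection concept concrete, below is a minimal sketch of a period-based detection scheme of the kind an Option III OPRM cell employs: successive oscillation periods that agree within a tolerance increment a confirmation count, and a trip is issued once the count and the normalized signal amplitude both exceed setpoints. All names, thresholds, and the naive peak finding are illustrative assumptions, not the licensed GEH algorithm or DSSPP itself.

```python
# Minimal sketch of a period-based oscillation detection scheme.
# Thresholds, names, and the naive peak finding are illustrative
# assumptions, not the licensed GEH algorithm or DSSPP.
def detect_instability(signal, dt, period_tol=0.15,
                       confirm_setpoint=10, amplitude_setpoint=1.10):
    """Return the sample index at which a trip would be issued, else None.

    signal: OPRM cell signal normalized to its running average (~1.0 at
    steady state); dt: sample spacing in seconds.
    """
    # Naive peak detection: a sample larger than both of its neighbors.
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i - 1] < signal[i] > signal[i + 1]]
    confirmations = 0
    for a, b, c in zip(peaks, peaks[1:], peaks[2:]):
        t1, t2 = (b - a) * dt, (c - b) * dt      # two successive periods
        if abs(t2 - t1) <= period_tol * t1:      # periods agree -> confirm
            confirmations += 1
        else:
            confirmations = 0                    # sequence broken -> reset
        if confirmations >= confirm_setpoint and signal[c] >= amplitude_setpoint:
            return c        # trip: sustained oscillation above the noise level
    return None
```

A post-processing tool in the spirit of DSSPP would sweep period_tol, confirm_setpoint, and amplitude_setpoint over recorded plant data to study trip timing versus spurious-scram susceptibility.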
    Keywords:
    Scram
Related Papers:
Smaller manufacturing processes have resulted in higher power densities, which put greater emphasis on packaging and temperature control during test. For system-on-chips, peak-power-based scheduling algorithms are used to optimize tests while satisfying power budgets. However, imposing power constraints does not by itself guarantee that overheating is avoided, because power is distributed non-uniformly across the chip. This paper presents a TAM/wrapper co-design methodology for system-on-chips that ensures thermal safety while still optimizing the test schedule. The method combines a simplified thermal-cost model with a traditional bin-packing algorithm to minimize test time while satisfying temperature constraints. Experiments show that even minimal increases in test time can yield a considerable decrease in test temperature, as well as the possibility of lowering temperatures further than is achieved with traditional power-based test scheduling.
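As a toy illustration of combining a thermal-cost model with bin packing, the sketch below greedily packs core tests into parallel sessions under a per-session thermal-cost cap. The cost model, the first-fit-decreasing heuristic, and all names are assumptions made for this sketch, not the paper's methodology.

```python
# Toy first-fit-decreasing packing with a per-session thermal-cost cap.
# Cost model, heuristic, and limits are invented for this sketch.
def schedule_tests(cores, thermal_limit):
    """cores: list of (name, test_time, thermal_cost) tuples.
    Cores in one session are tested in parallel; sessions run in sequence."""
    sessions = []
    for name, t, cost in sorted(cores, key=lambda c: -c[2]):
        for s in sessions:
            if s["cost"] + cost <= thermal_limit:   # fits under the thermal cap
                s["cores"].append(name)
                s["cost"] += cost
                s["time"] = max(s["time"], t)       # parallel within a session
                break
        else:
            sessions.append({"cores": [name], "cost": cost, "time": t})
    total_time = sum(s["time"] for s in sessions)   # sequential sessions
    return sessions, total_time
```

Tightening thermal_limit forces more sessions (longer test time, cooler peaks), which is the trade-off the paper's experiments quantify.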
    Overheating (electricity)
    Bin packing problem
    Citations (26)
A hierarchical approach to capacity and energy loss evaluation in bulk power transmission systems is presented. The approach consists of two levels or hierarchies: a higher level, aimed at producing accurate but relatively slower active capacity and energy loss evaluations, and a lower level, which yields fast but relatively approximate evaluations. This structure allows users to interactively control the level of accuracy of the calculation and the relative speed with which it is performed. Results of testing the performance of an implementation of the approach for accuracy, convergence, and speed (using systems with up to 530 buses) are also presented.
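One possible shape for such a two-level scheme, sketched under textbook assumptions (the paper's actual formulations are more detailed): a fast estimate from branch resistances and active flows, and a slower evaluation from a solved AC voltage profile.

```python
# Two-level loss evaluation sketch. Both models are textbook
# simplifications introduced here for illustration.
def losses_fast(branches_dc):
    # branches_dc: list of (r_pu, p_flow_pu); approximate branch loss as
    # r * p^2, ignoring reactive flow and voltage deviations.
    return sum(r * p ** 2 for r, p in branches_dc)

def losses_accurate(branches_ac, V):
    # branches_ac: (r_pu, x_pu, from_bus, to_bus); V: complex bus voltages
    # from a solved AC power flow. Series branch loss = |Vf - Vt|^2 * g.
    total = 0.0
    for r, x, f, t in branches_ac:
        g = r / (r * r + x * x)            # series conductance
        total += abs(V[f] - V[t]) ** 2 * g
    return total
```

The caller picks the hierarchy level: the fast estimate for interactive screening, the accurate one when a precise figure is needed.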
    Power transmission
    Power loss
    Citations (32)
High power consumption during test may lead to yield loss and premature aging. In particular, excessive peak power during at-speed delay fault testing is an important issue. In the literature, several techniques have been proposed to reduce peak power consumption during at-speed LOC or LOS delay testing. On the other hand, experiments have shown that too much test-power reduction may lead to test escapes and reliability problems. So, to avoid both yield loss and test escapes due to power issues during test, test power has to match the power consumed during functional mode. Some techniques have been proposed in the literature to apply test vectors that mimic functional operation from the switching-activity point of view. The process consists of shifting in a test vector (at low speed) and then applying several successive at-speed clock cycles before capturing the test response. In this paper, we propose a novel flow to determine the functional power to be used as upper and lower test-power limits during at-speed delay testing. The flow is also used to compare the above-mentioned test scheme with the power consumed during the functional operation mode of a given circuit. The proposed methodology has been validated on an Intel MC8051 microcontroller synthesized in a 65 nm industrial technology.
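The flow's core idea, deriving upper and lower power limits from functional switching activity and screening at-speed test cycles against them, can be sketched with a weighted-switching-activity proxy; the fanout weighting and all function names here are illustrative assumptions, not the paper's flow.

```python
# Sketch of deriving power limits from functional switching activity and
# screening at-speed test cycles against them. The weighted-switching-
# activity (WSA) proxy and all names are illustrative assumptions.
def wsa(prev_bits, bits, fanout):
    # Toggles weighted by each node's fanout approximate dynamic power.
    return sum(w for a, b, w in zip(prev_bits, bits, fanout) if a != b)

def functional_limits(functional_traces, fanout):
    """functional_traces: list of node-state sequences captured in
    functional mode; returns (lower, upper) test-power limits."""
    activity = [wsa(s0, s1, fanout)
                for trace in functional_traces
                for s0, s1 in zip(trace, trace[1:])]
    return min(activity), max(activity)

def test_cycle_ok(s0, s1, fanout, lo, hi):
    # A launch/capture cycle should fall inside the functional envelope:
    # under hi to avoid over-stress (yield loss), over lo to avoid
    # under-stress (test escape).
    return lo <= wsa(s0, s1, fanout) <= hi
```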
    Test compression
    Test vector
    Citations (24)
Time-domain-based online dynamic security assessment (DSA) systems require a fast and reliable method for simulation termination and margin calculation. B.C. Hydro's online DSA uses the second kick method, which has proved to be an elegant and reliable method for this purpose. This method was later enhanced to improve its implementation and computation requirements. The methods presented so far require mode of disturbance (MOD) information. In this paper, the fast second kick method is extended to remove the need for MOD data. The new method has been tested on the large-scale systems of B.C. Hydro and Hydro-Quebec, and the results have been compared to those of the fast second kick method. The results obtained indicate that the new method predicts the stability limits accurately. They also indicate that the new method takes on average 20% longer than the fast second kick method to calculate the stability limit. The implementation requirements for both methods are similar.
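The second kick method's details are beyond this abstract, so the sketch below shows only the generic ingredient it builds on: a time-domain run with an early-termination test. A simple rotor-angle separation check stands in for the method's deliberate second disturbance and transient-energy margin; every name and threshold here is an assumption.

```python
import math

# Generic early-termination skeleton for time-domain DSA. The actual
# second kick method applies a small second disturbance and measures a
# transient-energy margin; this simplified stand-in classifies a run from
# rotor-angle separation alone.
def classify(step_fn, t_end=10.0, dt=0.01, sep_limit=math.pi):
    """step_fn(t, dt) advances the simulation one step and returns the
    machine rotor angles in radians."""
    t = 0.0
    while t < t_end:
        angles = step_fn(t, dt)
        if max(angles) - min(angles) > sep_limit:
            return "unstable", t    # terminate early: machines have separated
        t += dt
    return "stable", t              # survived the study window
```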
    Margin (machine learning)
    Line (geometry)
    Citations (47)
This paper proposes a real-time stability index based on wide-area measurements made by PMUs. The proposed index can identify inter-area stability problems based mainly on two parameters: the voltage-magnitude change (reduction) and the phase-angle movement (separation). The most notable feature of this index is its fast and easy calculation from synchronously measured voltages, without system modelling and simulation and without dependence on network size. The results show that the behavior of the index with respect to changes in system stability is suitable. Additionally, a basic reactive-power control based on the index is implemented to show its applicability. The results show that its simplicity and ease of calculation make this index very suitable for online application: severe contingencies can be detected in time, and blackouts can be prevented when a control action is performed in advance.
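A minimal sketch of an index built from the two quantities the abstract names, worst voltage-magnitude reduction and inter-area phase-angle separation; the normalization and the weighting below are assumptions, not the paper's formula.

```python
import math

# Hypothetical combination of the two PMU-derived quantities the paper
# names; the weight and normalization are illustrative assumptions.
def stability_index(v_mag, v_ang, v_ref=1.0, w=0.5):
    """v_mag: per-unit PMU voltage magnitudes; v_ang: bus angles (rad).
    Returns a value in roughly [0, 1]; larger means closer to instability."""
    sag = max(0.0, v_ref - min(v_mag)) / v_ref     # worst magnitude reduction
    spread = (max(v_ang) - min(v_ang)) / math.pi   # angle separation, scaled
    return (1 - w) * sag + w * spread

# e.g. stability_index([0.97, 0.99, 0.93], [0.10, 0.35, -0.20])
```

Because the index needs only synchronized magnitudes and angles, its cost does not grow with network size, which is what makes it attractive for online screening.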
    Citations (4)
Abstract: The desirable accuracy of nuclear data for minimizing the design margin in fast reactor nuclear designs is evaluated through an optimization method, in which a parameter termed the "degree of difficulty" is introduced to represent the relative difficulty expected in improving the accuracy of each datum. This parameter serves to determine the desirable accuracy by which the design margin could be reduced least onerously, so as to avoid uselessly severe requirements being imposed on the accuracy demanded of data with high sensitivity coefficients. Application of this method to the equilibrium core of a prototype fast reactor identifies the nuclear data whose accuracy could most advantageously be improved. Furthermore, it is indicated that the ratio between the desirable accuracy and the present uncertainty level of nuclear data is roughly inversely proportional to the square root of the product of the sensitivity coefficient and the present uncertainty level. KEYWORDS: fast reactors, nuclear design, design margin, nuclear data, optimization method, sensitivity coefficient, desirable accuracy, effective multiplication factor, breeding ratio, Doppler reactivity, sodium void reactivity
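The scaling stated in the final sentence can be written compactly; the symbols are introduced here for illustration, with $\varepsilon_i$ the present uncertainty of datum $i$, $\varepsilon_i^{*}$ its desirable accuracy, and $S_i$ its sensitivity coefficient:

$$\frac{\varepsilon_i^{*}}{\varepsilon_i} \;\propto\; \frac{1}{\sqrt{S_i\,\varepsilon_i}}$$

Data with high sensitivity and large present uncertainty are thus asked to improve the most, but only in proportion to a square root, which is what tempers otherwise uselessly severe accuracy requirements.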
    Nuclear data
    Margin (machine learning)
    Accuracy and precision
    Citations (0)
The transient stability limits of power systems are determined by assuming conditions to be constant over short intervals of time and using the results obtained at the end of one interval to vary the conditions at the beginning of the next interval. The analysis is tedious and it is easy to make numerical errors. If a large number of machines is involved, it is essential to use an a.c. network analyser for determining the values of power to be used for the different time intervals, but there is no reduction in the computation labour, and in practice the greater part of the time taken for a study is devoted to the latter. To reduce the computation time and the possibility of numerical errors, the 'step' evaluations can be carried out semi-automatically by using the computer described. Furthermore, by reducing the time interval, greater accuracy can be obtained without an increase in overall time as compared with the normal procedure.
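The 'step' procedure being semi-automated is essentially the classical step-by-step integration of the swing equation, with the accelerating power held constant over each short interval. A minimal sketch follows; the machine constants and the sine power-angle curve are illustrative.

```python
import math

# Classical step-by-step method: accelerating power is held constant over
# each short interval, exactly the hand procedure the computer automates.
def swing_steps(delta0, M, Pm, Pe_of, dt=0.05, t_end=2.0):
    """Integrate M * delta'' = Pm - Pe(delta); returns (t, delta) samples."""
    delta, omega, t = delta0, 0.0, 0.0
    history = [(t, delta)]
    while t < t_end:
        Pa = Pm - Pe_of(delta)    # accelerating power, fixed over the step
        omega += (Pa / M) * dt    # update speed deviation
        delta += omega * dt       # then angle
        t += dt
        history.append((t, delta))
    return history

# Example: one machine vs. an infinite bus, Pe = Pmax * sin(delta).
trajectory = swing_steps(delta0=0.5, M=0.026, Pm=0.8,
                         Pe_of=lambda d: 1.2 * math.sin(d))
```

Halving dt improves accuracy at the cost of more steps, which is the time-interval trade-off the abstract's last sentence refers to.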
    Transient (computer programming)
    Analyser
    Interval arithmetic
    Citations (0)
The reliability analysis of the digital reactor protection system (RPS) is one of the essential parts of the probabilistic safety assessment (PSA) of the advanced boiling water reactor (ABWR). In this study, the reliability model and methodology were modified to evaluate the reliability of the digital RPS installed in the Japanese ABWR plant. Hardware failure rates from a foreign data source for digital components were applied, based on the similarity of the components' functions. The hardware failure rates of the digital components were estimated to range from 1.0E−5 (/hr) to 1.0E−7 (/hr), according to the type of component. Software error events and their recovery factors in the design and fabrication stages were evaluated, considering the verification and validation process provided by the Japanese industry guideline on digital reactor protection systems. The software failure probability of the programmable digital component was then evaluated from the probability of software error events and their recovery factors. The software failure probability was estimated to be 3.3E−7 (/demand), about one order of magnitude higher than our previous estimate. These models and results were applied to evaluate the reactor trip system (RTS) and the engineered safety feature (ESF) actuation system of the ABWR plant, both of which are subsystems of the RPS. The unavailability of the digital RTS was estimated at a mean value of 7.2E−06 (/demand). If both an alternate rod insertion (ARI) and a manual scram were credited, the unavailability was estimated to decrease to 1.6E−09, nearly equal to the mean value of the previous study, 1.1E−09 (/demand), even though the quantification model and data were considerably modified, including the software failure probability. The system unavailability of the emergency core cooling system (ECCS) was also evaluated in conjunction with the ESF actuation system, in order to investigate the effect of the model and data modification. The ECCS unavailability was likewise estimated to be nearly equal to the previous estimate, because the system unavailability is dominated by the unavailability of mechanical components such as pumps and valves. Sensitivity analyses were conducted systematically to evaluate the effect of modeling uncertainty on the digital RTS unavailability. The results indicated that the unavailability of the digital RTS changed only within a factor of 2, even under various assumptions about the hardware and software failures of the digital components.
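As a back-of-the-envelope illustration of how such figures combine, the sketch below folds a standby hardware failure rate and a per-demand software failure probability into a channel unavailability and a voted-logic system unavailability. The monthly test interval, the 2-out-of-4 voting, and the channel-independence assumption are mine, not the paper's PSA model, which also treats common-cause failures and recovery factors.

```python
from math import comb

# Toy combination of a standby hardware failure rate and a per-demand
# software failure probability; numeric inputs are taken from the abstract,
# the test interval and voting logic are assumptions.
def channel_unavailability(lam_per_hr, test_interval_hr, p_sw_demand):
    # Periodically tested standby component: mean unavailability ~ lambda*T/2.
    return lam_per_hr * test_interval_hr / 2.0 + p_sw_demand

def k_out_of_n_unavailability(q, k, n):
    # System fails when fewer than k of n independent channels are available.
    return sum(comb(n, i) * (1.0 - q) ** i * q ** (n - i) for i in range(k))

q = channel_unavailability(1.0e-6, 730.0, 3.3e-7)  # monthly test, abstract rates
print(k_out_of_n_unavailability(q, k=2, n=4))      # 2-out-of-4 trip logic
```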
    Unavailability
    Scram
    Citations (3)
The use of off-the-shelf components in microprocessor-based systems can limit the applicability of a number of hardware fault-tolerance methods. Software techniques offer attractive solutions for improving the reliability of systems operating in a hostile environment. The fault sensitivity of a system running a critical application obviously depends on the application's execution time and the amount of memory it uses. This study shows that the program structure also has a significant influence on fault sensitivity. Program characteristics, such as the size and duration of iterative and sequential sections, are required to determine the sensitivity profile. It is shown that, provided data dependency is not affected, one can rearrange the program structure to significantly reduce the average sensitivity of a program. Straightforward analysis of the sensitivity profile allows one to estimate the reduction. A simple example of code rearrangement is described, and it is shown that a 50% reduction could be achieved with respect to the initial structure. The magnitude of the reduction varies from one application to another.
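A toy version of the sensitivity-profile idea, assuming fault exposure scales with live data size times residence time; the exposure metric and the greedy reordering are illustrative, and reordering is only legal where data dependencies permit.

```python
# Toy sensitivity-profile model: a section's fault exposure is taken to
# scale with its live data size times how long that data stays resident.
def exposure(sections):
    """sections: list of (name, live_bytes, duration_s) in execution order.
    Assume each section's data stays resident from the moment the section
    starts until the program ends."""
    total = sum(d for _, _, d in sections)
    score, elapsed = 0.0, 0.0
    for _, live, dur in sections:
        score += live * (total - elapsed)   # bytes x seconds of residence
        elapsed += dur
    return score

def rearrange(sections):
    # Run large-data sections as late as dependencies allow, so their data
    # is exposed for less time.
    return sorted(sections, key=lambda s: s[1])

before = [("init", 4096, 0.1), ("compute", 512, 2.0), ("report", 64, 0.1)]
print(exposure(before), exposure(rearrange(before)))  # exposure drops
```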
    Microprocessor
    Citations (8)
Accuracy of the operational data of a power plant is essential for performance monitoring and fault diagnosis. However, due to the inevitable occurrence of systematic and measurement errors in the course of obtaining operational data, these errors can be reduced only to a certain level and never eliminated. In this work, we propose a data reconciliation based approach to reduce the errors of operational data and thus enhance its accuracy. The reconciled data can then be used in performance monitoring and fault diagnosis systems to improve their performance. The proposed method is based on more efficient use of redundant data and a first-principles mathematical model of a power plant. An optimization is then performed in which the weighted least-squares form of the aggregated differences between measured data and their estimated values is minimized. To illustrate the capability of the proposed method, we provide a case study of data reconciliation for feedwater heater heat balance analysis in a 660 MW coal-fired power plant in China. Results show that the uncertainty of four key parameters, namely feedwater mass flow rate, condensate mass flow rate, deaerator pressure and outlet temperature, can be reduced by 24%, 30%, 5% and 65% respectively, while the uncertainty of other parameters is also reduced to various extents. Moreover, the results indicate that the proposed approach is effective over a wide range of measured-data quality, where the quality of some data may be much worse than that of others and the estimated measurement uncertainties of operational data may not be accurate.
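A minimal sketch of linear data reconciliation: minimize the sigma-weighted squared adjustments subject to linear balance constraints A x = 0. The closed-form solution below is the standard textbook result; the heat-balance constraint matrix for a real feedwater-heater train would come from the plant model, and the numbers in the example are invented.

```python
import numpy as np

# Textbook weighted-least-squares reconciliation: minimize
# (x - m)^T S^-1 (x - m) subject to A x = 0, with S = diag(sigma^2).
def reconcile(m, sigma, A):
    """m: measurements; sigma: measurement std devs; A: constraint matrix."""
    S = np.diag(np.asarray(sigma, dtype=float) ** 2)   # measurement covariance
    lam = np.linalg.solve(A @ S @ A.T, A @ m)          # Lagrange multipliers
    return m - S @ A.T @ lam                           # reconciled estimates

# Example: a mass balance m1 - m2 - m3 = 0 among three flow measurements.
m = np.array([100.4, 60.1, 39.0])
x = reconcile(m, sigma=[0.5, 0.4, 0.4], A=np.array([[1.0, -1.0, -1.0]]))
print(x)   # adjusted flows; they now satisfy the balance exactly
```

Measurements with larger sigma absorb more of the adjustment, which is how redundancy plus a first-principles balance reduces the uncertainty of individual parameters.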
    Boiler feedwater
    Citations (6)