A Malleable Model Development for Critical Systems Supervision Integration
Luciano A. C. Lisboa, Thamiles Rodrigues de Melo, Ikaro G. S. A. Campos, Matheus B. Aragão, Alexandre S. Ribeiro, Lucas Cruz da Silva, Valéria Loureiro da Silva, A. M. N. Lima, Alex Álisson Bandeira Santos
Abstract:
Critical systems, whose failure or malfunction may result in severe human, environmental, or financial damage, are essential components in many sectors, particularly in the energy domain. Although undesired, integration errors do occur in the supervision of critical systems, incurring significant expenses due to operators' subjective analyses and failures in the hardware topology. In this work, a malleable model design approach is proposed to formulate and solve the integration error problem in critical systems supervision in terms of reliability. A real Hybrid Power Plant (HPP) provides the requirements for a case study with simulated data. A method framework combining an informal approach (the C4 diagram) and a formal approach (Hierarchical Coloured Petri Nets) in a radial spectrum is applied to the design of the HPP supervision. With formal methods, the problem can be formulated and solved through structured, scalable, and compact mathematical representations. The malleable model is intended to guarantee the functional correctness, and hence the reliability, of the plant supervision system based on its software architecture. The outcomes corroborate that the malleable model is appropriate for the energy domain and can be applied to other types of critical systems, bringing the benefits of the methodology to the context in which it is used.

Keywords: Reliability block diagram
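To make the formal side of the proposed framework concrete, the sketch below illustrates the flavour of a coloured Petri net in Python: places hold multisets of coloured tokens, and a transition fires only when a token of the required colour is available. The place and colour names are invented for illustration and are not taken from the paper's HPP model.

```python
from collections import Counter

# Minimal coloured-Petri-net sketch: places hold multisets of coloured
# tokens; a transition fires only when its input place contains a token
# of the required colour. Names are illustrative, not from the paper.
marking = {
    "sensor_ok": Counter({"solar": 1, "wind": 1}),  # one token per source
    "supervisor": Counter(),
}

def enabled(marking, colour):
    """The 'acquire' transition is enabled for a colour if a token of
    that colour is present in sensor_ok."""
    return marking["sensor_ok"][colour] > 0

def fire(marking, colour):
    """Consume one coloured token from sensor_ok; produce it in supervisor."""
    if not enabled(marking, colour):
        raise RuntimeError(f"'acquire' not enabled for colour {colour!r}")
    marking["sensor_ok"][colour] -= 1
    marking["supervisor"][colour] += 1

fire(marking, "solar")
print(marking["supervisor"])  # Counter({'solar': 1})
```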
In this paper we examine the capabilities and limitations of Petri nets and investigate techniques for proving their correctness. We define different classes of nets where each is basically a Petri net with slight modifications and study the relationship between the various classes. One particular class appears to be quite powerful, with respect to its capability for representing coordinations. In the second part of the paper we establish the feasibility of using the methods of computational induction and inductive assertions to prove restricted statements about Petri nets.
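As an illustration of the inductive-assertion idea applied to nets, the sketch below (ours, not the paper's) enumerates the reachable markings of a toy place/transition net and checks a place invariant at every reachable state:

```python
# Illustrative only: explore all reachable markings of a tiny
# place/transition net and check an inductive assertion (a place
# invariant) at every reachable state.
transitions = {
    # name: (tokens consumed, tokens produced) over places (p0, p1)
    "t0": ((1, 0), (0, 1)),
    "t1": ((0, 1), (1, 0)),
}

def enabled(marking, consumed):
    return all(m >= c for m, c in zip(marking, consumed))

def fire(marking, consumed, produced):
    return tuple(m - c + p for m, c, p in zip(marking, consumed, produced))

def check_invariant(initial, invariant):
    seen, frontier = set(), [initial]
    while frontier:
        m = frontier.pop()
        if m in seen:
            continue
        seen.add(m)
        assert invariant(m), f"invariant violated at marking {m}"
        for consumed, produced in transitions.values():
            if enabled(m, consumed):
                frontier.append(fire(m, consumed, produced))
    return len(seen)

# Invariant: the total token count is conserved (always 1 here).
n = check_invariant((1, 0), lambda m: sum(m) == 1)
print(f"invariant holds in all {n} reachable markings")
```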
A mathematical basis for verifying computer programs has existed for some time but is used in practice only relatively infrequently. One of the reasons often cited for this lack of acceptance is that the proposed approaches fail to address certain questions of practical importance, e.g. whether a program even executes successfully or not. The commonly propounded proof rules do not deal with "run time" errors. While such considerations can be brought into a formal analysis of program correctness, previously proposed methods add extra steps and significant complexity to the process or are at least perceived by many to do so. Traditional approaches to program correctness distinguish between partial and total correctness. Very frequently the only difference between them is the consideration of whether loops terminate or not. This is not sufficient in practice, where the failure of even a simple assignment statement to execute must be considered. Three levels of correctness are required: partial correctness (as defined traditionally), semi-strictly total correctness and strictly total correctness. Semi-strictly total correctness guarantees that each individual statement in the program executes with a defined result (no "run time" errors) but permits loops (and recursive calls) to execute indefinitely. Strictly total correctness guarantees in addition that execution of the entire program terminates. The paper presents straightforward extensions to traditional proof rules for semi-strict and strict preconditions.
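One possible way to state the three levels, in informal Hoare-triple style (the notation is ours; the paper's proof rules are phrased as extensions of traditional rules and may differ):

```latex
\begin{align*}
\text{partially correct:} \quad
  & P \Rightarrow (\text{if } S \text{ halts normally, then } Q)\\
\text{semi-strictly totally correct:} \quad
  & \text{partially correct} \;\wedge\;
    \text{no statement of } S \text{ fails at run time}\\
  & \text{(loops and recursive calls may still diverge)}\\
\text{strictly totally correct:} \quad
  & \text{semi-strictly totally correct} \;\wedge\; S \text{ terminates}
\end{align*}
```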
Relative correctness is the property of a program to be more-correct than another with respect to a specification. Whereas traditional (absolute) correctness distinguishes two classes of candidate programs with respect to a specification (correct and incorrect), relative correctness defines a partial ordering between candidate programs, whose maximal elements are the (absolutely) correct programs. In this paper we argue that relative correctness ought to be an integral part of the study of program repair, as it plays for program repair the role that absolute correctness plays for program construction: in the same way that absolute correctness is the criterion by which we judge the process of deriving a program P from a specification R, relative correctness ought to be the criterion by which we judge the process of repairing a program P to produce a program P' that is more-correct than P with respect to R. We build on this premise to design a generic program repair algorithm, which proceeds by successive increases of relative correctness until absolute correctness is achieved. We further argue that, in the same way that correctness ideas were used a few decades ago as a basis for correct-by-design programming, relative correctness ideas may be used, in time, as a basis for more-correct-by-design program repair.
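A compressed reading of that repair strategy, as a hedged Python sketch: the specification is approximated here by (input, expected-output) pairs, and `candidates` stands in for any patch generator; the paper's actual algorithm is defined over specifications rather than test data.

```python
def competent_inputs(program, spec):
    """Inputs on which the program meets the specification; spec is
    approximated as an iterable of (input, expected-output) pairs."""
    return {x for x, want in spec if program(x) == want}

def more_correct(p_new, p_old, spec):
    """p_new is strictly more-correct than p_old w.r.t. spec if it is
    correct on a strict superset of the inputs p_old is correct on."""
    return competent_inputs(p_old, spec) < competent_inputs(p_new, spec)

def repair(program, spec, candidates):
    """Repair by successive relative-correctness increases until the
    program is correct on every specified input (absolute correctness)."""
    while competent_inputs(program, spec) != {x for x, _ in spec}:
        for cand in candidates(program):
            if more_correct(cand, program, spec):
                program = cand  # accept the strictly more-correct patch
                break
        else:
            break               # no candidate improves correctness; stop
    return program
```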
We studied students' conceptions of correctness and their influence on students' correctness-related practices by examining how 159 students analyzed the correctness of error-free and erroneous algorithms and by interviewing seven students about their work. We found that students conceptualized program correctness as the sum of the correctness of its constituent operations and therefore rarely considered programs incorrect. Instead, as long as some operations were written correctly, students considered the program 'partially correct'. We suggest that this conception is a faulty extension of the concept of a program's grade, which is usually calculated as the sum of points awarded for separate aspects of a program; school thus (unintentionally) nurtures students' misconception of correctness. This misconception is aligned with students' tendency to employ a line-by-line verification method, examining whether each operation is translated as a sub-requirement of the algorithm, which is inconsistent with the method of testing that they formally studied.
It has been argued, in relation to Old Babylonian mathematical procedure texts, that their validity or correctness is self-evident: one "sees" that the procedure is correct without it having, or being accompanied by, any explicit arguments for its correctness. Even when agreeing with this view, one might still ask how the correctness of a procedure is articulated. In this work, we present an articulation of the correctness of ancient Egyptian and Old Babylonian mathematical procedure texts, i.e. mathematical texts presenting the solution of problems. We endeavor to make explicit and explain how and why the procedures are reliable, over and above the fact that their correctness is intuitive.
Existing computational solutions for stepwise correctness checking of free-response solution schemes consisting of equations only consider providing qualitative feedback. This research therefore proposes a computational model of the typical stepwise correctness checking of a scheme of student-constructed responses normally performed by a human examiner, with the provision of quantitative feedback. The responses are worked solutions to linear algebraic equations in one variable. The proposed computational model comprises computational techniques for the key marking processes and has enabled a marking-engine prototype, developed from the model, to perform stepwise correctness checking and scoring of the response at each step of a working scheme as well as of the working scheme as a whole. The numeric score assigned to each step, or analytical score, serves as quantitative feedback informing students of the degree of correctness of the response at a particular step; the numeric score of the working scheme, or overall score, indicates the degree of correctness of the whole working scheme.

Existing computational solutions determine response correctness based on the mathematical equivalence of expressions. In this research, the degree of correctness of an equation is instead based on the structural identicalness of its constituent math-tokens, evaluated using a correctness measure formulated in this research. Experimental verification shows that the evaluation of correctness by this measure is comparable to human judgment on correctness. The computational model is formalized mathematically using basic concepts from Multiset Theory, while the process framework is supported by basic techniques and processes of this research. The data used are existing worked solutions for linear algebraic equations in one variable from a previous pilot study, as well as new test sets; a test of correctness shows that the computational model is able to generate the expected output, so the underlying computational techniques of the model can be regarded as correct.

The agreement between the automated and the manual marking methods was analysed in terms of the agreement between the correctness scores, in two phases: Phase I involved 561 working schemes comprising 2021 responses, and Phase II involved 350 working schemes comprising 1385 responses. The analyses determined the percent agreement, degree of correlation, and degree of agreement between the automated and manual scores. The accuracy of the scores was determined by calculating the average absolute errors in the automated scores, calibrated by the average mixed errors. The results show that both the automated analytical scores and the automated overall scores exhibited high percent agreement, high correlation, a high degree of agreement, and small average absolute and mixed errors. It can be inferred that the automated scores are comparable with manual scores and that the stepwise correctness checking and scoring technique of this research agrees with human marking. The computational model of stepwise quantitative assessment is therefore a valid and reliable model to be used in place of a human examiner to check and score responses to similar questions, in both formative and summative assessment settings.
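The correctness measure itself is specific to that research; purely as an illustration of grading an equation by the structural identicalness of its token multiset, a Counter-based comparison might look like the following (the tokenizer and the scoring formula are our simplifications, not the paper's):

```python
import re
from collections import Counter

def tokens(equation):
    """Very naive tokenizer for linear equations: numbers, symbols, and
    operators. Illustrative only; the research defines its own
    math-tokens."""
    return Counter(re.findall(r"\d+|[a-zA-Z]+|[=+\-*/()]", equation))

def degree_of_correctness(response, model_answer):
    """Score in [0, 1]: proportion of the model answer's token multiset
    matched by the student's response (multiset intersection)."""
    got, want = tokens(response), tokens(model_answer)
    overlap = sum((got & want).values())  # size of multiset intersection
    return overlap / sum(want.values())

print(degree_of_correctness("2x = 6", "2*x = 6"))  # 0.8
```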
Relative correctness is the property of a program to be more-correct than another with respect to a given specification. Whereas the traditional definition of (absolute) correctness divides candidate programs into two classes (correct and incorrect), relative correctness arranges candidate programs in the richer structure of a partial ordering. In other venues we discuss the impact of relative correctness on program derivation and on program verification. In this paper, we discuss the impact of relative correctness on program testing; specifically, we argue that when we remove a fault from a program, we ought to test the new program for relative correctness over the old program, rather than for absolute correctness. We present analytical arguments to support our position, as well as an empirical argument in the form of a small program whose faults are removed in a stepwise manner, its relative correctness rising with each fault removal until we obtain a correct program.
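That testing discipline can be phrased as a regression-style check. The sketch below is our illustration, not the authors' tool: after a fault removal, the new program must pass every test the old one passed and at least one test it previously failed.

```python
def passes(program, test):
    """A test is an (input, expected-output) pair; any exception counts
    as a failure."""
    x, expected = test
    try:
        return program(x) == expected
    except Exception:
        return False

def relatively_correct_over(p_new, p_old, tests):
    """p_new passes everything p_old passed, plus at least one more."""
    kept = all(passes(p_new, t) for t in tests if passes(p_old, t))
    gained = any(passes(p_new, t) for t in tests if not passes(p_old, t))
    return kept and gained

def buggy(x): return x * x   # meant to double x, squares it instead
def fixed(x): return x + x

tests = [(0, 0), (2, 4), (3, 6)]
print(relatively_correct_over(fixed, buggy, tests))  # True
```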
Coverage-based fault localization is a spectrum-based technique that identifies the executed program elements that correlate with failure. However, its effectiveness suffers from coincidental correctness, which occurs when a fault is executed but no failure is detected. Coincidental correctness is prevalent and has been shown to reduce the safety of coverage-based fault-localization techniques. In this paper, we propose a new fault-localization approach based on the probability of coincidental correctness. We estimate the probability that coincidental correctness occurs for each program execution using dynamic data-flow analysis and control-flow analysis. To evaluate our approach, we use safety and precision as evaluation metrics. Our experiment involved 62 seeded versions of C programs from SIR. We compare our results with Tarantula and with two improved CBFL techniques that cleanse test suites of coincidentally correct tests. The results show that our approach can improve the safety and precision of fault localization to a certain degree.
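The paper's probability model is its own; as a rough sketch of the general idea only, a Tarantula-style suspiciousness score can down-weight each covering pass by an externally estimated probability that the pass is coincidental (how that probability is estimated via data-flow and control-flow analysis is out of scope here):

```python
def suspiciousness(elem_cov, outcomes, p_cc):
    """Tarantula-style score with coincidental-correctness weighting.
    elem_cov[i]: True if test i covered the element; outcomes[i]:
    'pass' or 'fail'; p_cc[i]: estimated probability that test i's
    pass is coincidental (0 for failing tests)."""
    total_fail = sum(1 for o in outcomes if o == "fail")
    total_pass = len(outcomes) - total_fail
    cov_fail = sum(1 for c, o in zip(elem_cov, outcomes)
                   if c and o == "fail")
    # Down-weight covering passes that are likely coincidentally correct.
    cov_pass = sum(1 - p for c, o, p in zip(elem_cov, outcomes, p_cc)
                   if c and o == "pass")
    f = cov_fail / total_fail if total_fail else 0.0
    p = cov_pass / total_pass if total_pass else 0.0
    return f / (f + p) if (f + p) else 0.0

# Example: two passing tests (one likely coincidental) and one failure.
print(suspiciousness([True, True, True],
                     ["pass", "pass", "fail"],
                     [0.9, 0.1, 0.0]))
```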