Hosting capacity of distributed generation based on holomorphic embedding method in distribution networks
Abstract:
Considering the voltage rise caused by integrating large-scale distributed generation into distribution networks, a distributed generation hosting capacity assessment method based on an improved holomorphic embedding method is proposed. First, the relationship between distributed generation penetration and the voltage at the access point is explored, and voltage violation is used as the constraint under which the hosting capacity is solved. Second, a self-defined directional holomorphic embedding method is proposed based on the classical model, and the security region under the voltage constraints is derived from it. The intersection of the bus voltage trajectory with the boundary of the voltage constraint region serves as the criterion for the maximum hosting capacity of distributed generation under a single access scenario. Then, a sufficient number of distributed generation access scenarios are generated by Monte Carlo sampling, and the proposed criterion is used to solve the hosting capacity under each scenario. A cumulative distribution curve is obtained from the statistics of the solved hosting capacities; it represents the relationship between the level of voltage violation risk and the hosting capacity of distributed generation. The validity and correctness of the proposed method are verified on the IEEE 22-bus distribution network.
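As background, a minimal sketch of the classical holomorphic embedding on which such methods build; the paper's self-defined directional embedding modifies this classical model, and its exact form is not given in the abstract. For a PQ bus i with complex injection S_i and bus admittance matrix Y, the voltages are embedded in a complex parameter s:

\[ \sum_{k} Y_{ik}\, V_k(s) \;=\; \frac{s\, S_i^{*}}{V_i^{*}(s^{*})}, \qquad V_i(s) \;=\; \sum_{n=0}^{\infty} c_i[n]\, s^{n}, \]

where V_i(0) is the trivial no-load (germ) solution and s = 1 recovers the operating point. Scaling the distributed-generation injections along s traces a holomorphic bus-voltage trajectory, and the hosting capacity in a given scenario is read off at the first intersection of this trajectory with the voltage bound |V_i| = V_max.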
Related Papers:
Relative correctness is the property of a program to be more-correct than another with respect to a specification. Whereas traditional (absolute) correctness distinguishes two classes of candidate programs with respect to a specification (correct and incorrect), relative correctness defines a partial ordering between candidate programs, whose maximal elements are the (absolutely) correct programs. In this paper we argue that relative correctness ought to be an integral part of the study of program repair, as it plays for program repair the role that absolute correctness plays for program construction: in the same way that absolute correctness is the criterion by which we judge the process of deriving a program P from a specification R, relative correctness ought to be the criterion by which we judge the process of repairing a program P to produce a program P' that is more-correct than P with respect to R. We build on this premise to design a generic program repair algorithm, which proceeds by successive increases of relative correctness until absolute correctness is achieved. We further argue that in the same way that correctness ideas were used, a few decades ago, as a basis for correct-by-design programming, relative correctness ideas may be used, in time, as a basis for more-correct-by-design program repair.
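A minimal sketch of the repair loop this abstract describes, under illustrative assumptions: the specification is modeled as a finite set of (input, expected output) pairs, and the candidate generator mutants(p) is a hypothetical placeholder. The paper's actual algorithm is not reproduced here.

# Sketch of a repair loop driven by relative correctness.
# `spec` is a set of (input, expected_output) pairs; `mutants(program)`
# is an assumed helper yielding candidate patched programs.

def passes(program, spec):
    """Subset of the specification that `program` satisfies."""
    return {(x, y) for (x, y) in spec if program(x) == y}

def more_correct(p_new, p_old, spec):
    """p_new is at least as correct as p_old if it satisfies a superset
    of the specification pairs that p_old satisfies."""
    return passes(p_old, spec) <= passes(p_new, spec)

def repair(program, spec, mutants):
    """Raise relative correctness step by step until absolute correctness."""
    while passes(program, spec) != spec:              # not absolutely correct yet
        candidates = [q for q in mutants(program)
                      if passes(q, spec) > passes(program, spec)]  # strictly more-correct
        if not candidates:
            return program                            # no strictly more-correct mutant found
        program = candidates[0]
    return program

The loop terminates because each step strictly enlarges the satisfied subset of a finite specification, mirroring the successive increases of relative correctness described above.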
We studied students' conceptions of correctness and their influence on students' correctness-related practices by examining how 159 students analyzed the correctness of error-free and erroneous algorithms, and by interviewing seven students about their work. We found that students conceptualized program correctness as the sum of the correctness of its constituent operations; therefore, they rarely considered a program incorrect. Instead, as long as any operations were written correctly, students considered the program 'partially correct'. We suggest that this conception is a faulty extension of the concept of a program's grade, which is usually calculated as the sum of points awarded for separate aspects of a program; school thus (unintentionally) nurtures students' misconception of correctness. This misconception is aligned with students' tendency to employ a line-by-line verification method, checking whether each operation translates a sub-requirement of the algorithm, which is inconsistent with the method of testing that they formally studied.
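An illustrative example (ours, not from the study) of why line-by-line plausibility does not compose into whole-program correctness:

# Every line below is individually a reasonable operation, yet the
# program is incorrect: the initialization is wrong for all-negative
# inputs. A line-by-line check awards credit to each operation and
# misses the fault; testing exposes it immediately.

def maximum(xs):
    best = 0                 # locally plausible, but wrong initialization
    for x in xs:
        if x > best:
            best = x
    return best

print(maximum([-3, -1]))     # prints 0; the correct answer is -1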
It has been argued, in relation to Old Babylonian mathematical procedure texts, that their validity or correctness is self-evident: one "sees" that the procedure is correct without it having, or being accompanied by, any explicit arguments for its correctness. Even when agreeing with this view, one might still ask how the correctness of a procedure is articulated. In this work, we present an articulation of the correctness of ancient Egyptian and Old Babylonian mathematical procedure texts, that is, mathematical texts presenting the solution of problems. We endeavor to make explicit and explain how and why the procedures are reliable, over and above the fact that their correctness is intuitive.
The main objects of study in this chapter are holomorphic functions h: U → V, with U and V open in ℂ, that are one-to-one and onto. Such a holomorphic function is called a conformal (or biholomorphic) mapping. The fact that h is one-to-one implies that h′ is nowhere zero on U (recall that if h′ vanishes to order k ≥ 1 at a point P ∈ U, then h is (k+1)-to-1 in a small neighborhood of P; see §§5.2.1). As a result, h⁻¹: V → U is also holomorphic, as discussed in §§5.2.1. A conformal map h: U → V from one open set to another can be used to transfer holomorphic functions on U to V and vice versa: that is, f: V → ℂ is holomorphic if and only if f ∘ h is holomorphic on U; and g: U → ℂ is holomorphic if and only if g ∘ h⁻¹ is holomorphic on V.
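A classical worked example, not taken from this excerpt: the Cayley transform

h(z) = (z − i)/(z + i)

is a conformal map of the upper half-plane U = {z ∈ ℂ : Im z > 0} onto the unit disk V = {w ∈ ℂ : |w| < 1}. It is one-to-one and onto, h′(z) = 2i/(z + i)² is nowhere zero on U, and its inverse h⁻¹(w) = i(1 + w)/(1 − w) is holomorphic on V; accordingly, f ↦ f ∘ h transfers holomorphic functions on V to holomorphic functions on U, exactly as described above.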
Existing computational solutions for stepwise correctness checking of free-response solution schemes consisting of equations provide only qualitative feedback. This research therefore proposes a computational model of the stepwise correctness checking of a scheme of student-constructed responses normally performed by a human examiner, with the provision of quantitative feedback. The responses are worked solutions to linear algebraic equations in one variable. The proposed computational model comprises computational techniques for the key marking processes, and it has enabled a marking-engine prototype, developed from the model, to perform stepwise correctness checking and scoring of the response at each step of a working scheme as well as of the working scheme as a whole. The numeric score assigned to each step, or analytical score, serves as quantitative feedback informing students of the degree of correctness of the response at a particular step; the numeric score of the working scheme, or overall score, indicates the degree of correctness of the whole working scheme.

Existing computational solutions determine response correctness based on the mathematical equivalence of expressions. In this research, the degree of correctness of an equation is instead based on the structural identicalness of its constituent mathtokens, evaluated using a correctness measure formulated in this research. Experimental verification shows that the evaluation of correctness by this measure is comparable to human judgment. The computational model is formalized mathematically using basic concepts from Multiset Theory, while the process framework is supported by the basic techniques and processes of this research. The data used are existing worked solutions to linear algebraic equations in one variable from a previous pilot study as well as new test sets; the test of correctness shows that the computational model generates the expected output, so its underlying computational techniques can be regarded as correct.

The agreement between the automated and manual marking methods was analysed in terms of the agreement between the correctness scores, in two phases: Phase I involved 561 working schemes comprising 2021 responses, and Phase II involved 350 working schemes comprising 1385 responses. The analyses determined the percent agreement, degree of correlation, and degree of agreement between the automated and manual scores. The accuracy of the scores was determined by calculating the average absolute errors in the automated scores, calibrated by the average mixed errors. The results show that both the automated analytical scores and the automated overall scores exhibited high percent agreement, high correlation, a high degree of agreement, and small average absolute and mixed errors. It can be inferred that the automated scores are comparable with manual scores and that the stepwise correctness checking and scoring technique of this research agrees with the human marking technique. The computational model of stepwise quantitative assessment is therefore a valid and reliable substitute for a human examiner in checking and scoring responses to questions similar to those used in this research, in both formative and summative assessment settings.
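A minimal sketch, in the spirit described above, of a multiset-based degree-of-correctness measure for one step; the tokenizer and the overlap formula are illustrative assumptions, not the thesis's actual measure.

from collections import Counter
import re

def tokenize(equation: str) -> Counter:
    """Very simple math tokenizer: splits an equation like '2*x + 3 = 7'
    into a multiset of tokens. A real system would use a proper parser."""
    return Counter(re.findall(r"\d+|[a-zA-Z]+|[^\s]", equation))

def degree_of_correctness(response: str, reference: str) -> float:
    """Illustrative structural-identicalness score in [0, 1]: the multiset
    overlap of tokens between a student's step and the reference step."""
    r, s = tokenize(reference), tokenize(response)
    overlap = sum((r & s).values())                  # multiset intersection size
    total = max(sum(r.values()), sum(s.values()))
    return overlap / total if total else 1.0

# Example: a partially correct step earns partial analytical credit.
print(degree_of_correctness("2*x = 7 - 3", "2*x = 4"))   # about 0.57

An overall score for a working scheme could then be an aggregate (for example, the mean) of the per-step analytical scores.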
Relative correctness is the property of a program to be more-correct than another with respect to a given specification. Whereas the traditional definition of (absolute) correctness divides candidate programs into two classes (correct and incorrect), relative correctness arranges candidate programs in the richer structure of a partial ordering. In other venues we discuss the impact of relative correctness on program derivation and on program verification; in this paper, we discuss its impact on program testing. Specifically, we argue that when we remove a fault from a program, we ought to test the new program for relative correctness over the old program, rather than for absolute correctness. We present analytical arguments to support our position, as well as an empirical argument in the form of a small program whose faults are removed in a stepwise manner, its relative correctness rising with each fault removal until we obtain a correct program.
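A minimal sketch of what testing for relative correctness over a finite test suite could look like; the pass/fail oracle and the helpers are illustrative assumptions, not the paper's definitions.

def failing_tests(program, suite, oracle):
    """Tests in `suite` on which `program` disagrees with the oracle."""
    return {t for t in suite if program(t) != oracle(t)}

def relatively_correct_over(p_new, p_old, suite, oracle):
    """After a fault removal, check the new program against the old one:
    p_new must fail on a subset of the tests p_old fails on, i.e. it may
    fix failures but must not introduce new ones on this suite."""
    return failing_tests(p_new, suite, oracle) <= failing_tests(p_old, suite, oracle)

Absolute correctness on the suite is then the special case where failing_tests(p, suite, oracle) is empty; the stepwise fault removals described above correspond to a chain of programs, each relatively correct over its predecessor.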
Coverage-based fault localization is a spectrum-based technique that identifies the executed program elements that correlate with failure. Its effectiveness, however, suffers from coincidental correctness, which occurs when a fault is executed but no failure is detected. Coincidental correctness is prevalent and has been shown to reduce the safety of coverage-based fault-localization techniques. In this paper, we propose a new fault-localization approach based on the probability of coincidental correctness: for each program execution, we estimate the probability that coincidental correctness occurs, using dynamic data-flow analysis and control-flow analysis. To evaluate our approach, we use safety and precision as metrics. Our experiment involved 62 seeded versions of C programs from SIR, and we compare our results with Tarantula and with two improved CBFL techniques that cleanse test suites of coincidental correctness. The results show that our approach can improve the safety and precision of the fault-localization technique to a certain degree.
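A minimal sketch of one way such probabilities could be used, assuming the per-test coincidental-correctness probabilities p_cc[t] have already been estimated (the paper's estimation via data-flow and control-flow analysis is not reproduced here); the Tarantula score itself is standard.

def tarantula(failed_cov, passed_cov, n_fail, n_pass):
    """Standard Tarantula suspiciousness for one program element, given the
    (possibly weighted) counts of failing and passing tests that cover it."""
    if n_fail == 0 or (failed_cov == 0 and passed_cov == 0):
        return 0.0
    fail_ratio = failed_cov / n_fail
    pass_ratio = passed_cov / n_pass if n_pass else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

def weighted_counts(element, tests, covers, outcome, p_cc):
    """Discount each passing test by its estimated probability p_cc[t] of
    being coincidentally correct: a passing run that likely executed the
    fault without failing is weaker evidence of the element's innocence."""
    failed = sum(1 for t in tests if covers(t, element) and outcome[t] == "fail")
    passed = sum(1.0 - p_cc[t] for t in tests
                 if covers(t, element) and outcome[t] == "pass")
    return failed, passed

Feeding the weighted counts into the Tarantula formula raises the suspiciousness of elements that are mostly covered by suspected coincidentally correct passing runs.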