    Thermal Analysis of the Battery Box in Electric Vehicles
    Citations: 0 · References: 4 · Related Papers: 10
    Abstract:
    Based on an analysis of the heat-generation mechanism of the lithium-ion battery, the thermal properties of specific heat capacity and thermal conductivity were obtained by experiment, and a thermal model of the single battery cell was established. To verify the correctness of the model and obtain the temperature field distribution, a thermal simulation of the cell was conducted using ANSYS Fluent. The cells were then grouped into a five-cell module, and a simulation analysis of the grouped battery was performed for the cases of natural convection and forced convection. Finally, the validity of the simulation was verified by experiment.
    Theoretical and applied problems of correctness in evaluating output in the System of National Accounts (SNA) are examined in the study. Experiments are carried out to elaborate, and to put into practice, different methods for improving the correctness of output evaluation. First, the theoretical problems connected with the pure branch, the units in the SNA, inflation, and import output are examined. Second, different proposals are given for applying these methods in statistical practice, and their priorities are pointed out.
    Citations (0)
    Relative correctness is the property of a program to be more-correct than another with respect to a specification. Whereas traditional (absolute) correctness distinguishes two classes of candidate programs with respect to a specification (correct and incorrect), relative correctness defines a partial ordering between candidate programs, whose maximal elements are the (absolutely) correct programs. In this paper we argue that relative correctness ought to be an integral part of the study of program repair, as it plays for program repair the role that absolute correctness plays for program construction: in the same way that absolute correctness is the criterion by which we judge the process of deriving a program P from a specification R, relative correctness ought to be the criterion by which we judge the process of repairing a program P to produce a program P' that is more-correct than P with respect to R. We build on this premise to design a generic program repair algorithm, which proceeds by successive increases of relative correctness until absolute correctness is achieved. We further argue that in the same way that correctness ideas were used, a few decades ago, as a basis for correct-by-design programming, relative correctness ideas may be used, in time, as a basis for more-correct-by-design program repair.
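The generic repair loop described in this abstract can be sketched with relative correctness modeled as inclusion of competence domains (the inputs on which a program agrees with its specification). All names below (`competence_domain`, `repair`, the toy specification and candidate pool) are illustrative assumptions, not the paper's actual formalism:

```python
# Minimal sketch: repair guided by relative correctness.
# Assumption: a specification is a dict mapping input -> expected output,
# and candidate "repairs" are alternative functions to try.

def competence_domain(program, spec):
    """Inputs on which the program agrees with the specification."""
    return {x for x, y in spec.items() if program(x) == y}

def more_correct(p_new, p_old, spec):
    """p_new is at least as correct as p_old when its competence
    domain contains p_old's (relative correctness as inclusion)."""
    return competence_domain(p_old, spec) <= competence_domain(p_new, spec)

def repair(program, spec, candidates):
    """Accept a candidate only if it strictly enlarges the competence
    domain; stop once the program is absolutely correct."""
    while competence_domain(program, spec) != set(spec):
        for cand in candidates:
            if (more_correct(cand, program, spec)
                    and competence_domain(cand, spec) > competence_domain(program, spec)):
                program = cand
                break
        else:
            break  # no candidate increases relative correctness
    return program

# Toy example: specification for absolute value.
spec = {-2: 2, -1: 1, 0: 0, 1: 1, 2: 2}
buggy = lambda x: x            # wrong on negative inputs
cands = [lambda x: -x, abs]    # one partial "fix", one full fix
fixed = repair(buggy, spec, cands)
print(competence_domain(fixed, spec) == set(spec))  # True
```

Each accepted candidate is more-correct than its predecessor, so the loop climbs the partial ordering the abstract describes until it reaches a maximal (absolutely correct) element.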
    Citations (6)
    We studied students' conceptions of correctness and their influence on students' correctness-related practices by examining how 159 students analyzed the correctness of error-free and erroneous algorithms, and by interviewing seven students about their work. We found that students conceptualized program correctness as the sum of the correctness of its constituent operations and, therefore, rarely considered programs incorrect. Instead, as long as any operations were written correctly, students considered the program 'partially correct'. We suggest that this conception is a faulty extension of the concept of a program's grade, which is usually calculated as the sum of points awarded for separate aspects of a program; thus school (unintentionally) nurtures students' misconception of correctness. This misconception is aligned with students' tendency to employ a line-by-line verification method, examining whether each operation is translated as a sub-requirement of the algorithm, which is inconsistent with the method of testing that they formally studied.
    Citations (27)
    It has been argued in relation to Old Babylonian mathematical procedure texts that their validity or correctness is self-evident: one "sees" that the procedure is correct without it having, or being accompanied by, any explicit arguments for its correctness. Even when agreeing with this view, one might still ask how the correctness of a procedure is articulated. In this work, we present an articulation of the correctness of ancient Egyptian and Old Babylonian mathematical procedure texts, that is, mathematical texts presenting the solution of problems. We endeavor to make explicit and explain how and why the procedures are reliable, over and above the fact that their correctness is intuitive.
    Summary form only given, as follows; the complete presentation was not made available for publication as part of the conference proceedings. Correctness is an important issue in computer science and software engineering. For concurrent systems, the definition of correctness is usually based on properties of infinite execution paths. Bounded correctness is a kind of correctness defined on finite paths, and provides a different view of the issue of correctness. This talk focuses on the concept of bounded correctness and the relation between correctness and bounded correctness. For the purpose of verification, the definition of correctness based on infinite paths is not directly applicable as a means of verification; approaches for this purpose include those based on the analysis of strongly connected components and on the computation of fixed points. On the other hand, correctness may be verified in terms of bounded correctness by an approach derived from the definition of bounded correctness. The complementarity of these verification approaches is explained.
    Existing computational solutions for stepwise correctness checking of free-response solution schemes consisting of equations only consider providing qualitative feedback. Hence, this research proposes a computational model of the stepwise correctness checking, normally performed by a human examiner, of a scheme of student-constructed responses, with the provision of quantitative feedback. The responses are worked solutions to linear algebraic equations in one variable. The proposed computational model comprises computational techniques for the key marking processes, and has enabled a marking-engine prototype, developed based on the model, to perform stepwise correctness checking and scoring of the response at each step of a working scheme, and of the working scheme as a whole. The numeric score assigned to each step, or analytical score, serves as quantitative feedback informing students of the degree of correctness of the response at a particular step. The numeric score of the working scheme, or overall score, indicates the degree of correctness of the whole working scheme. Currently available computational solutions determine response correctness based on the mathematical equivalence of expressions. In this research, the degree of correctness of an equation is instead based on the structural identicalness of its constituent mathtokens, evaluated using a correctness measure formulated in this research. Experimental verification shows that the evaluation of correctness by this measure is comparable to human judgment. The computational model is formalized mathematically using basic concepts from Multiset Theory.
The data used are existing worked solutions to linear algebraic equations in one variable from a previous pilot study, as well as new test sets. The test of correctness shows that the computational model is able to generate the expected output; hence, the underlying computational techniques of the model can be regarded as correct. The agreement between the automated and the manual marking methods was analysed in terms of the agreement between the correctness scores, in two phases. Phase I involved a total of 561 working schemes comprising 2021 responses; Phase II used a total of 350 working schemes comprising 1385 responses. The analyses determined the percent agreement, degree of correlation, and degree of agreement between the automated and manual scores. The accuracy of the scores was determined by calculating the average absolute errors present in the automated scores, calibrated by the average mixed errors. The results show that both the automated analytical scores and the automated overall scores exhibited high percent agreement, high correlation, a high degree of agreement, and small average absolute and mixed errors. It can be inferred that the automated scores are comparable with manual scores and that the stepwise correctness checking and scoring technique of this research agrees with the human marking technique. Therefore, the computational model of stepwise quantitative assessment is a valid and reliable model for use in place of a human examiner to check and score responses to similar questions, in both formative and summative assessment settings.
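A token-level degree of correctness of the kind this abstract describes can be illustrated with a small sketch. The multiset Jaccard overlap used here is an assumed stand-in for the thesis's actual correctness measure, and the character-level tokenizer is a deliberately crude placeholder for proper mathtoken extraction:

```python
# Sketch: degree of correctness of one equation step as the overlap of
# mathtoken multisets, not mathematical equivalence of expressions.
# Assumptions: character-level "mathtokens" and a Jaccard-style overlap.

from collections import Counter

def mathtokens(equation):
    """Crude placeholder tokenizer: one token per non-space character."""
    return Counter(equation.replace(" ", ""))

def correctness_degree(response, expected):
    """Multiset Jaccard overlap of the two token multisets, in [0, 1]."""
    r, e = mathtokens(response), mathtokens(expected)
    overlap = sum((r & e).values())  # multiset intersection
    total = sum((r | e).values())    # multiset union
    return overlap / total if total else 1.0

# Expected step: 2x = 6; the student wrote 2x = 8.
print(round(correctness_degree("2x = 8", "2x = 6"), 2))  # 0.6
```

Because the score is structural rather than based on equivalence, a response can earn partial credit for a step that is close to the expected equation without being mathematically equal to it.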
    Citations (0)
    Relative correctness is the property of a program to be more-correct than another with respect to a given specification. Whereas the traditional definition of (absolute) correctness divides candidate programs into two classes (correct and incorrect), relative correctness arranges candidate programs on the richer structure of a partial ordering. In other venues we discuss the impact of relative correctness on program derivation and on program verification. In this paper, we discuss the impact of relative correctness on program testing; specifically, we argue that when we remove a fault from a program, we ought to test the new program for relative correctness over the old program, rather than for absolute correctness. We present analytical arguments to support our position, as well as an empirical argument in the form of a small program whose faults are removed in a stepwise manner, its relative correctness rising with each fault removal until we obtain a correct program.
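The testing discipline this abstract advocates can be sketched as a differential check: after a fault removal, the new version must pass every test the old version passed, plus at least one more. The helper names and the toy clamp specification below are illustrative assumptions, not taken from the paper:

```python
# Sketch: test a repaired program for relative correctness over the old
# version, rather than for absolute correctness.

def passing_tests(program, tests):
    """Indices of (input, expected) test cases the program passes."""
    return {i for i, (x, y) in enumerate(tests) if program(x) == y}

def relatively_more_correct(new, old, tests):
    """new must pass everything old passed, and strictly more."""
    old_pass, new_pass = passing_tests(old, tests), passing_tests(new, tests)
    return old_pass <= new_pass and old_pass != new_pass

# Toy specification: clamp an integer to the range [0, 10].
tests = [(-5, 0), (3, 3), (12, 10)]
old_version = lambda x: x            # fails two of the three tests
new_version = lambda x: max(0, x)    # one fault removed, one remains
print(relatively_more_correct(new_version, old_version, tests))  # True
```

Note that `new_version` still fails a test, so it would be rejected by an absolute-correctness oracle even though the fault removal was a genuine improvement; the relative check accepts it.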
    Citations (3)
    Coverage-based fault localization is a spectrum-based technique that identifies the executed program elements that correlate with failure. However, its effectiveness suffers from coincidental correctness, which occurs when a fault is executed but no failure is detected. Coincidental correctness is prevalent and has been shown to reduce the safety of coverage-based fault-localization techniques. In this paper, we propose a new fault-localization approach based on the probability of coincidental correctness. We estimate the probability that coincidental correctness occurs for each program execution using dynamic data-flow analysis and control-flow analysis. To evaluate our approach, we use safety and precision as evaluation metrics. Our experiment involved 62 seeded versions of C programs from SIR. We compare our results with Tarantula and with two improved CBFL techniques that cleanse test suites of coincidental correctness. The results show that our approach can improve the safety and precision of the fault-localization technique to a certain degree.
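One way to picture the idea, as a hedged sketch rather than the paper's actual method: down-weight each passing test's contribution to a Tarantula-style suspiciousness score by its estimated probability of being coincidentally correct. The per-test `cc_prob` values below are assumed inputs standing in for the paper's data-flow-based estimates:

```python
# Sketch: weighting passing-test coverage by an assumed per-test
# probability of coincidental correctness before scoring an element.

def tarantula(exec_fail, total_fail, exec_pass, total_pass):
    """Classic Tarantula suspiciousness for one program element."""
    if exec_fail == 0:
        return 0.0
    fail_ratio = exec_fail / total_fail
    pass_ratio = exec_pass / total_pass if total_pass else 0.0
    return fail_ratio / (fail_ratio + pass_ratio)

def weighted_pass_count(covering_pass_tests, cc_prob):
    """Each covering passing test contributes 1 - P(coincidental)."""
    return sum(1.0 - cc_prob[t] for t in covering_pass_tests)

# One element, covered by failing test f1 and passing tests p1, p2.
cc_prob = {"p1": 0.9, "p2": 0.1}  # assumed estimates, p1 likely coincidental
score_plain = tarantula(1, 1, 2, 2)
score_weighted = tarantula(1, 1, weighted_pass_count({"p1", "p2"}, cc_prob), 2)
print(score_plain, score_weighted)  # the weighted score is higher
```

Discounting a passing execution that probably reached the fault anyway raises the element's suspiciousness, which is the direction in which cleansing coincidental correctness is expected to improve safety.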
    Citations (5)