ON GENERALIZED REUSABLE VERIFICATION ENVIRONMENT BASED TESTING

2014 
Testing can be subdivided into defining or generating test inputs and test scenarios, specifying test oracles to judge testing results, and executing test cases. In this paper we introduce techniques to tackle these problems while taking agent properties into account, and we investigate automated ways to generate test inputs that can produce a large number of different and challenging situations to exercise the agents under test. This automated generation helps, to some extent, in dealing with the dynamic nature of the environments in which the agents under test operate. Because agents are autonomous, deciding whether an agent exhibits correct behaviour is not as straightforward as for traditional programs; we therefore employ three approaches to evaluate the behaviour of software agents. We first address test evaluation, since feedback from test results gives important insights that guide automated test input generation. We then introduce monitoring as a way to collect data about test execution; the monitoring technique copes with the distributed and asynchronous nature of agent-based systems and provides a global view during test execution. Finally, we present different test generation methodologies to reveal faults.
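The feedback loop sketched in the abstract, in which evaluation results guide subsequent test input generation, can be illustrated with a minimal sketch. All names here (`generate_inputs`, `agent_under_test`, `oracle`, `run_campaign`) are hypothetical stand-ins for illustration, not the paper's actual implementation; the fault region and numeric inputs are assumptions.

```python
import random

def generate_inputs(rng, feedback, n=5):
    """Generate test inputs; bias generation toward regions that previously
    produced failures (the feedback-guided aspect described in the text)."""
    if feedback:
        # Assumption: inputs are numeric, so we mutate around failing inputs.
        per_seed = max(1, n // len(feedback))
        return [f + rng.uniform(-0.5, 0.5) for f in feedback for _ in range(per_seed)]
    # No feedback yet: sample the input space uniformly.
    return [rng.uniform(-10.0, 10.0) for _ in range(n)]

def agent_under_test(x):
    """Stand-in for the autonomous agent; misbehaves on a hidden fault region."""
    return "fail" if 3.0 <= x <= 4.0 else "ok"

def oracle(result):
    """Test oracle: judges whether the observed behaviour is correct."""
    return result == "ok"

def run_campaign(rounds=20, seed=0):
    """Generate-execute-evaluate loop: each round's failures feed the next."""
    rng = random.Random(seed)
    failing = []
    for _ in range(rounds):
        for x in generate_inputs(rng, failing):
            if not oracle(agent_under_test(x)):
                failing.append(x)
    return failing
```

In this sketch the oracle is a simple predicate; the paper's point is that for autonomous agents such predicates are harder to state, which is why evaluation feedback and runtime monitoring are needed alongside generation.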