This paper proposes the design of a specific programme dedicated to students who want to specialize in software testing at the postgraduate level. The motivation behind this proposal is to encourage more higher learning institutions to offer a variety of software engineering related programmes, in particular software testing, so that this area is recognized as another key contributor to developing competent testing professionals. Although there are many software testing professional certifications available in the market, there is still a need to offer this kind of programme at university, at the postgraduate level, to nurture the importance of software testing in producing quality software. The proposed programme is designed on a coursework basis, mainly to cater for working students. As the structure emphasizes a mix of theory and practical knowledge with the use of suitable testing tools, the substance of this programme should at least cover the fundamentals of testing; the key types of testing performed today, such as functional testing, security testing, performance testing, usability testing and compatibility testing; and the latest trends in testing, such as testing in an agile environment. It is expected that the proposed programme will produce graduates who demonstrate advanced competencies in software testing and are able to apply the acquired skills to produce high-quality software.
Usability defects that escape testing can have a negative impact on the success of software. It is quite common for projects to have a tight timeline, and for these projects it is crucial to ensure that effective processes are in place. One way to ensure project success is to improve the manual processes of usability inspection via automation. An automated usability tool enables the evaluator to reduce manual work and focus on capturing more defects in a shorter period of time, thus improving the effectiveness of the usability inspection and minimizing defect escapes. Many usability testing and inspection methods exist; the scope of this paper is the automation of Heuristic Evaluation (HE) procedures. The Usability Management System (UMS) was developed to automate as many manual steps as possible throughout the software development life cycle (SDLC). It is important for the various teams within the organization to understand the benefits of automation. The results show that with the help of automation more usability defects can be detected. Hence, enhancing the effectiveness of usability evaluation through an automated Heuristic Evaluation system is feasible.
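As a concrete illustration of the kind of manual step such a tool can absorb, the sketch below shows how an automated heuristic evaluation might record defects against Nielsen's ten heuristics and tally them for a report. This is a minimal sketch under assumed names; the class, fields and severity scale are illustrative and not the actual UMS design.

```python
from collections import Counter
from dataclasses import dataclass

# Nielsen's ten usability heuristics, commonly used as the checklist in HE.
NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class UsabilityDefect:
    screen: str        # where the problem was observed
    heuristic: str     # which heuristic it violates
    severity: int      # 0 (not a problem) .. 4 (usability catastrophe)
    description: str

def summarize(defects):
    """Aggregate logged defects by heuristic and by severity for the evaluation report."""
    by_heuristic = Counter(d.heuristic for d in defects)
    by_severity = Counter(d.severity for d in defects)
    return by_heuristic, by_severity

# Example: two defects logged by an evaluator during one inspection session.
log = [
    UsabilityDefect("Checkout", "Error prevention", 3, "No confirmation before order deletion"),
    UsabilityDefect("Search", "Visibility of system status", 2, "No progress indicator on slow queries"),
]
print(summarize(log))
```

Automating this bookkeeping is what frees the evaluator to spend inspection time on finding defects rather than collating them.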
Reviews of the literature have shown that many website usability problems are found late in the Software Development Life Cycle (SDLC). This is due to the use of traditional usability testing techniques, which are not sufficient or suitable for the growing complexity of websites and the constraints faced by usability practitioners. For instance, Lab Based Usability Testing (LBUT) is expensive and has less coverage than Exploratory Heuristics Evaluation (EHE), while EHE is subject to false alarms. A hybrid usability methodology (HUM) comprising LBUT and EHE is proposed. Six experiments involving EHE and LBUT were performed at the early, intermediate and advanced stages of the SDLC of websites, during which the relative performance of each method was measured using the dependent variables, followed by the design of the HUM. To validate the HUM, four case studies were conducted, during which significant improvements were observed in website effectiveness and efficiency, as illustrated by the measures sketched below. Based on the findings, HUM is a feasible approach for usability practitioners and also provides stakeholders with a validated situational decision-making framework for usability testing strategies that takes real-world constraints into account.
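For readers unfamiliar with the dependent variables, effectiveness and efficiency are conventionally operationalized in usability studies as task completion rate and time-based task throughput. The sketch below shows one common way to compute them; the formulas and figures are illustrative assumptions, not the paper's actual instruments or data.

```python
def effectiveness(completed_tasks, attempted_tasks):
    """Effectiveness as the proportion of test tasks completed successfully."""
    return completed_tasks / attempted_tasks

def time_based_efficiency(results):
    """Average of (success / time) over all task attempts, i.e. successful tasks per second.

    `results` is a list of (success: bool, seconds: float) tuples, one per attempt.
    """
    return sum((1.0 if ok else 0.0) / t for ok, t in results) / len(results)

# Hypothetical measurements for one case-study website after applying the methodology.
print(effectiveness(18, 20))                                        # 0.9 completion rate
print(time_based_efficiency([(True, 42.0), (True, 55.0), (False, 90.0)]))
```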
Defect prediction is an important aspect of the product development life cycle. The rationale for knowing the predicted number of functional defects early in the life cycle, rather than just finding as many defects as possible during the testing phase, is to determine when to stop testing and to ensure that all in-phase defects have been found in-phase before a product is delivered to the intended end user. It also ensures that wider test coverage is put in place to discover the predicted defects. This research aims to achieve zero known post-release defects in the software delivered to the end user by MIMOS Berhad. To achieve this target, the research effort focuses on establishing a test defect prediction model using the Design for Six Sigma methodology in a controlled environment where all the factors contributing to the defects of the product are within MIMOS Berhad. It identifies the requirements for the prediction model and how the model can benefit the organization. It also outlines the possible predictors associated with defect discovery in the testing phase. An analysis of the repeatability and capability of test engineers in finding defects is demonstrated. This research also describes the process of identifying the characteristics of the data that need to be collected and how to obtain them. The relationship between customer needs and the technical requirements of the proposed model is then analyzed and explained. Finally, the proposed test defect prediction model is demonstrated via multiple regression analysis, incorporating testing metrics and development-related metrics as the predictors. The achievement of the whole research effort is described at the end of this study, together with the challenges faced and recommendations for future research work.
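A minimal sketch of the multiple-regression step is shown below. The predictor metrics (executed test cases, code churn, requirement changes) and the release data are hypothetical stand-ins chosen for illustration; the study's actual predictors and MIMOS Berhad data are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical historical releases: one row per release.
# Columns: executed test cases, code churn (KLOC changed), requirement changes.
X = np.array([
    [120,  5.2,  3],
    [200,  8.1,  7],
    [150,  6.0,  4],
    [300, 12.4, 10],
    [180,  7.3,  5],
    [250, 10.0,  8],
])
# Functional defects found in system testing for each of those releases.
y = np.array([14, 30, 19, 52, 24, 41])

# Ordinary least squares with an intercept term.
model = sm.OLS(y, sm.add_constant(X)).fit()
print(model.summary())   # coefficients, p-values, R-squared

# Predict defects for an upcoming release before its testing phase starts.
upcoming = sm.add_constant(np.array([[220, 9.0, 6]]), has_constant='add')
print(model.predict(upcoming))
```

A prediction of this kind gives the test team a defect target against which to judge whether testing has found the expected in-phase defects before release.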
The current court system is simply not conducive to ordinary people and small businesses resolving low-value claims at proportionate cost, speed and efficiency. Moreover, e-commerce consumers' lack of knowledge of online dispute resolution (ODR) and the limited availability of independent ODR platforms in Malaysia have led to consumer dissatisfaction. The objective of this study is to gauge the level of knowledge of ODR, identify dissatisfaction among Malaysians, and hence propose an independent ODR system. A mixed-method approach is used, combining a literature analysis of the human-computer interaction and software testing fields with a survey as the data collection method. The results confirmed the lack of knowledge of ODR among Malaysians, and based on the responses when faced with a "dissatisfied experience" (a pain-storming approach), an Independent Evaluation ODR framework is proposed. The proposed framework could be used together with an ODR system similar to an electronic court, specifically catering to the resolution of e-commerce related disputes. In the future, the survey could be extended to a wider audience base and cover more stages of the ODR process.
Knowing how good your software is prior to release could indicate whether the software can really work in the actual environment, and executing the system test allows this to take place. By applying a simple analytics approach to the system test case results of PASS or FAIL for each test strategy imposed, points can be assigned per test case for every test iteration, and scores can then be calculated. This is done for every test tool used per test strategy. The average of the accumulated scores from all test strategies is mapped to a predefined rating table to establish the software product rating. The proposed approach can be used for complete or ongoing system testing, and serves as an early indicator of the software's expected behavior in the actual environment.
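The arithmetic can be illustrated with a small sketch: one point per PASS per test case per iteration (a simplifying assumption, since the exact point scheme is not spelled out here), a per-strategy score as the percentage of points earned, and a final rating from the average across strategies. The rating bands below are illustrative, not the predefined rating table from the paper.

```python
# Results per test strategy: {strategy: [[iteration 1 results], [iteration 2 results], ...]}
# Each iteration is a list of "PASS"/"FAIL" outcomes, one per test case.
results = {
    "functional":  [["PASS", "PASS", "FAIL"], ["PASS", "PASS", "PASS"]],
    "performance": [["PASS", "FAIL"],         ["PASS", "PASS"]],
}

# Illustrative rating table: minimum average score -> rating label.
RATING_TABLE = [(90, "Excellent"), (75, "Good"), (50, "Fair"), (0, "Poor")]

def strategy_score(iterations):
    """Percentage of points earned, one point per PASS per test case per iteration."""
    total = sum(len(it) for it in iterations)
    earned = sum(outcome == "PASS" for it in iterations for outcome in it)
    return 100.0 * earned / total

def product_rating(results):
    """Average the per-strategy scores and map the average onto the rating table."""
    scores = [strategy_score(its) for its in results.values()]
    average = sum(scores) / len(scores)
    label = next(name for threshold, name in RATING_TABLE if average >= threshold)
    return average, label

print(product_rating(results))   # approximately (79.2, 'Good')
```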
All models are wrong; some models are useful. This book describes an academia-industry effort to adopt the Six Sigma methodology in building a practical prediction model for functional test defects in the system testing phase. The focus is on the rationale behind the research and the systematic way of carrying it out based on Design for Six Sigma. An overview of Six Sigma is provided to give the audience a quick understanding of what the methodology really is and why it was selected for this effort. The research also highlights the use of metrics available prior to testing in building the model. Regression analysis is applied to these metrics, which later become the significant factors for predicting functional defects in the system testing phase. The verification process for the selected model is shown towards the end of the book, together with a control plan for continuously enhancing and strengthening the model.
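To make the "significant factors" step concrete, the sketch below screens candidate pre-testing metrics by their p-values in an ordinary least squares fit and keeps only those significant at the 5% level. The metric names, data and threshold are assumptions for illustration, not the model described in the book.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical pre-testing metrics for past releases (columns of X) and the
# functional defects later found in system testing for each release (y).
names = ["test_cases", "code_churn", "team_size"]
X = np.array([
    [120,  5.2, 4],
    [200,  8.1, 5],
    [150,  6.0, 4],
    [300, 12.4, 6],
    [180,  7.3, 5],
    [250, 10.0, 6],
    [210,  9.1, 5],
])
y = np.array([14, 30, 19, 52, 24, 41, 33])

fit = sm.OLS(y, sm.add_constant(X)).fit()

# Keep only metrics whose coefficients are significant at the 5% level;
# these would be the candidate factors retained in the prediction model.
significant = [name for name, p in zip(names, fit.pvalues[1:]) if p < 0.05]
print(significant)
```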