This paper presents a camera-laser projector system for real-time estimation of the distance to obstacles, designed to assist wheelchair users with cognitive impairment. When the distance to an obstacle falls below a specified safety threshold, an alarm is raised that the control system can use to act immediately and avert a possible collision, even before the user stops the wheelchair. The system consists of a fisheye camera, whose large field of view (FOV) keeps the projected pattern visible at all times, and a laser circle projector mounted at a fixed baseline. The approach uses the geometric information obtained by projecting the laser circle onto a plane that is simultaneously perceived by the camera. We present a theoretical study of the system in which the camera is modelled as a sphere and show that estimating a conic on this sphere allows the distance between the wheelchair and the obstacle to be recovered. We report experiments on simulated data followed by real sequences. The distances estimated by our method are comparable to those of commercial sensors in accuracy and correctness. The fact that our inexpensive system matches costly commercial sensors demonstrates its suitability for an affordable wheelchair able to assist users with cognitive impairments. The proposed solution remains functional in low-light to dark environments, where decision making can be challenging for the user.
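The underlying measurement principle can be illustrated with a simplified pinhole triangulation sketch (an assumption for illustration only; the paper itself models the fisheye camera as a sphere and fits a conic, and all names and parameter values below are invented):

```python
# Simplified laser-camera triangulation sketch (pinhole model, NOT the
# paper's spherical conic model). All parameter values are illustrative.

def distance_from_disparity(pixel_x, principal_x, focal_px, baseline_m):
    """Depth of a projected laser point observed at pixel_x.

    For a projector-camera pair separated by a horizontal baseline b,
    the depth Z follows the standard triangulation relation
        Z = f * b / d,
    where d is the pixel offset (disparity) of the laser spot.
    """
    disparity = pixel_x - principal_x
    if disparity <= 0:
        raise ValueError("laser spot must appear offset from the epipole")
    return focal_px * baseline_m / disparity

# Example: focal length 600 px, baseline 0.10 m, spot observed 40 px
# from the principal point -> Z = 600 * 0.10 / 40 = 1.5 m.
depth = distance_from_disparity(pixel_x=360, principal_x=320,
                                focal_px=600, baseline_m=0.10)
print(f"estimated obstacle distance: {depth:.2f} m")
```

A safety alarm would then simply compare this estimate against the configured threshold distance.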
The software industry has matured over time, from small applications of a few lines of code to applications of millions of lines. In the past few years, a major industry concern regarding software size estimation has been the convertibility between the International Function Point User Group (IFPUG) method and the COmmon Software Measurement International Consortium (COSMIC) method, which would allow organizations to leverage their large investment in IFPUG, since there is still no cost and effort estimation tool for COSMIC function points. IFPUG is one of the earliest estimation methods; however, with the introduction of a more scientific method such as COSMIC, which has wider applicability than IFPUG while both methods use the same measuring unit and principle, the continued relevance of IFPUG is called into question. Because the two methods share a similar underlying principle, and so that organizations that have invested heavily in IFPUG do not lose that investment when migrating to COSMIC, researchers have explored converting the output of one method into the other. This paper reviews some of the popular conversion formulas suggested so far to identify trends and assess how related, consistent, and reliable the formulas are. We estimate the function points of two case studies using both COSMIC and IFPUG, then insert our estimates into the formulas to see how close or divergent their outputs are compared with our calculations. The results vary widely and nothing conclusive can be said, although two of the formulas give a closer estimation range than the others. We also highlight why COSMIC may be more desirable today than IFPUG and present the progress made toward establishing a conversion relationship between the two methods.
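The comparison procedure described above can be sketched as follows (the linear coefficients are illustrative placeholders, not the published regression formulas reviewed in the paper):

```python
# Sketch of comparing linear IFPUG-to-COSMIC conversion formulas.
# The coefficients below are invented placeholders, NOT the actual
# published conversion models reviewed in the paper.

CONVERSION_FORMULAS = {
    "formula_A": lambda ufp: 1.0 * ufp - 3,
    "formula_B": lambda ufp: 0.9 * ufp + 5,
    "formula_C": lambda ufp: 1.2 * ufp - 87,
}

def compare(ifpug_ufp, measured_cfp):
    """Apply each conversion formula to an IFPUG count and report the
    deviation from the independently measured COSMIC size."""
    report = {}
    for name, formula in CONVERSION_FORMULAS.items():
        predicted = formula(ifpug_ufp)
        report[name] = {
            "predicted_cfp": predicted,
            "error": predicted - measured_cfp,
        }
    return report

# Hypothetical case study: IFPUG measurement gives 120 UFP while an
# independent COSMIC measurement gives 110 CFP.
for name, row in compare(120, 110).items():
    print(name, row)
```

Large spread in the per-formula errors across case studies is what motivates the paper's conclusion that no single conversion formula is yet reliable.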
The analysis and follow-up of asphalt infrastructure using image processing techniques has received increased attention recently. However, the vast majority of developments have focused only on determining the presence or absence of road damage, forgoing other more pressing concerns. To be useful to road managers and governmental agencies, the information gathered during an inspection procedure must provide actionable insights that go beyond punctual and isolated measurements: the characteristics, type, and extent of the road damage must be effectively and automatically extracted and digitally stored, preferably using inexpensive mobile equipment. In recent years, computer vision acquisition systems have emerged as a promising solution for automated road damage inspection when integrated into georeferenced mobile computing devices such as smartphones. However, the artificial intelligence algorithms that power these systems have been rather limited owing to the scarcity of large and homogenized road damage datasets. In this work, we aim to contribute to bridging this gap with two strategies. First, we introduce a new and very large asphalt dataset, which incorporates a set of damages not present in previous studies, making it more robust and representative of certain damages such as potholes. This dataset is composed of 18,345 road damage images captured by a mobile phone mounted on a car, with 45,435 instances of road surface damage (linear, lateral, and alligator cracks; potholes; and various types of painting blurs). To generate this dataset, we obtained images from several public datasets and augmented them with crowdsourced images, which were manually annotated for further processing.
The images were captured under a variety of weather and illumination conditions, and a quality-aware data augmentation strategy was employed to filter out samples of poor quality, which improved the performance metrics over the baseline. Second, we trained different object detection models amenable to mobile implementation, achieving acceptable performance for many applications. We performed an ablation study to assess the effectiveness of the quality-aware data augmentation strategy and compared our results with other recent works, achieving better accuracy (mAP) for all classes and lower inference times (3× faster).
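For reference, the mAP metric reported above is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes; a minimal IoU helper (illustrative, not the paper's evaluation code) looks like:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection is commonly counted as correct when IoU >= 0.5;
# mAP averages precision over recall levels (and classes) under
# that matching rule.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```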
Context: This thesis analyzes a replication of a software experiment conducted by Natalia and Sira at the Technical University of Madrid, Spain. The empirical study was conducted to verify and validate the experimental data and to evaluate the effectiveness and efficiency of the testing techniques. The analysis blocks considered were observable faults, failure visibility, and observed faults. The statistical data analysis used the ANOVA and Classification packages of SPSS. Objective: To evaluate and compare the results obtained from the statistical data analysis, and to verify and validate the effectiveness and efficiency of the testing techniques using ANOVA and classification tree analysis on percentage of subjects, percentage of defect-subjects, and Yes/No values for each of the blocks. RQ1: Empirical evaluation of the effectiveness of the fault detection testing techniques for the blocks (observable faults, failure visibility, and observed faults), using ANOVA and classification trees. RQ2: Empirical evaluation of the efficiency of the fault detection techniques, based on time and number of test cases, using ANOVA. RQ3: Comparison and interpretation of the results obtained for both effectiveness and efficiency. Method: The research focuses on statistical data analysis to empirically evaluate the effectiveness and efficiency of the fault detection techniques for the experimental data collected at UPM (Technical University of Madrid, Spain). Empirical strategy used: software experiment. Results: The analysis results obtained for the observable fault types were standardized (Ch. 5). Within the observable fault block, both techniques, functional and structural, were equally effective. In the failure visibility block, the results were partially standardized.
The program types nametbl and ntree were more effective for fault detection than cmdline. The results for the observed fault block were partially standardized and diverse; the significant factors in this block were program type, fault type, and technique. In the efficiency block, subjects took less time to isolate faults in the program type cmdline, and fault detection with the generated test cases was also most efficient for cmdline. Conclusion: This research will help practitioners in industry and academia understand the factors influencing the effectiveness and efficiency of testing techniques. This work also presents a comprehensive analysis and comparison of the results for the observable fault, failure visibility, and observed fault blocks, and discusses the factors influencing the efficiency of the fault detection techniques.
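The kind of analysis described in the Method section can be sketched with a minimal one-way ANOVA computation (the thesis used SPSS; the detection-rate data below are invented for illustration only):

```python
# Minimal one-way ANOVA F-statistic, sketching the kind of test run in
# SPSS on the experiment's blocks. The data below are illustrative.

def one_way_anova_f(groups):
    """Return the F statistic for a list of sample groups."""
    k = len(groups)                      # number of groups (e.g. techniques)
    n = sum(len(g) for g in groups)      # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (k - 1 degrees of freedom)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (n - k degrees of freedom)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative fault-detection percentages for two techniques:
functional = [70, 75, 72, 68, 74]
structural = [71, 73, 70, 69, 75]
f_stat = one_way_anova_f([functional, structural])
print(f"F = {f_stat:.3f}")  # a small F suggests no significant difference
```

Comparing the F statistic against the critical value for the chosen significance level is what decides whether a factor (technique, program type, fault type) is significant in a block.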
Research on damage detection of road surfaces has been an active area, but most studies have so far focused on detecting the presence of damage. However, in real-world scenarios, road managers need to clearly understand the type of damage and its extent in order to take effective action in advance or to allocate the necessary resources. Moreover, few uniform and openly available road damage datasets currently exist, so there is no common benchmark for road damage detection. Such a dataset could be used in a great variety of applications; here, it is intended to serve as the acquisition component of a physical asset management tool that can aid government agencies in planning, or infrastructure maintenance companies. In this paper, we make two contributions to address these issues. First, we present a large-scale road damage dataset, which includes a more balanced and representative set of damages. This dataset is composed of 18,034 road damage images captured with a smartphone, with 45,435 instances of road surface damage. Second, we trained different types of object detection methods, both traditional (an LBP-cascaded classifier) and deep learning-based, specifically MobileNet and RetinaNet, which are amenable to embedded and mobile implementations with acceptable performance for many applications. We compare the accuracy and inference time of all these models with others in the state of the art.