Developers often encounter usability problems when working with software development tools. Detecting such problems is important for improving development team performance. To tackle this problem, pattern discovery techniques can be used to locate usability problems by analyzing user feedback extracted from software repositories. In this paper, a comprehensive data analysis methodology is presented to extract hidden knowledge about the challenges users face when interacting with tools. The main motivation of this paper is to incorporate the role of knowledge in software development processes such as Agile development. Rich user feedback datasets from the StackOverflow programming Question and Answer (Q&A) repository, after the required preprocessing, have been used as input to the Apriori algorithm. The generated results are association rules representing usability problem patterns in interactions among tools and technologies. The results can also be used for effort planning when a software upgrade is under consideration.
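As a rough illustration of the rule-mining step described above, the following sketch brute-forces the support and confidence computations at the heart of Apriori over a handful of made-up "transactions" (tool tags co-occurring in a post). The tags and thresholds are illustrative assumptions, not data from the paper.

```python
# A minimal, self-contained sketch of Apriori-style rule mining, assuming each
# StackOverflow post has already been preprocessed into a "transaction" of
# co-occurring tool/technology tags. Tags and thresholds are illustrative.
from itertools import combinations

transactions = [
    {"eclipse", "maven", "junit"},
    {"eclipse", "maven"},
    {"intellij", "gradle", "junit"},
    {"eclipse", "junit"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.5, 0.7

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# Frequent itemsets of size 1 and 2 (enough to illustrate the idea).
items = sorted({i for t in transactions for i in t})
frequent = [frozenset(c) for k in (1, 2)
            for c in combinations(items, k)
            if support(frozenset(c)) >= MIN_SUPPORT]

# Derive rules A -> B with confidence = support(A u B) / support(A).
for itemset in (s for s in frequent if len(s) == 2):
    for a in itemset:
        antecedent, consequent = frozenset([a]), itemset - {a}
        conf = support(itemset) / support(antecedent)
        if conf >= MIN_CONFIDENCE:
            print(f"{set(antecedent)} -> {set(consequent)} "
                  f"(support={support(itemset):.2f}, confidence={conf:.2f})")
```

A production pipeline would of course prune candidates level by level as Apriori does; the brute-force version above only illustrates how support and confidence yield the usability-pattern rules.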
The main purpose of this study is to evaluate core neutronic parameters of the SMART reactor, a design-certified Small Modular Reactor (SMR), via the Monte Carlo method. The SMART neutronic parameters, such as axial and radial neutron flux distributions, Power Peaking Factors (PPFs), the effective delayed neutron fraction, xenon and samarium effects, burnup, and the neutron flux energy spectrum, have been assessed. Daily load-follow operation in a soluble-boron-free core using regulating control banks is one of the main advantages of the SMART reactor core. Accordingly, the effects of inserting the main regulating bank into the SMART core have been evaluated. To verify the developed model, some of the neutronic parameters have been compared with the SMART Standard Safety Analysis Report (SSAR) and show good agreement. The remaining neutronic parameters of the SMART core, as a pioneering SMR, have then been calculated and evaluated.
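The abstract does not state how the effective delayed neutron fraction was estimated; one common Monte Carlo approach is the prompt-k-ratio method, sketched below with made-up eigenvalues.

```python
# Illustrative only: the prompt-k-ratio estimate of the effective delayed
# neutron fraction, beta_eff ~ 1 - k_p / k_eff, where k_eff comes from a
# standard Monte Carlo run and k_p from a run with delayed neutrons switched
# off. The eigenvalues below are made-up numbers, not SMART results.
k_eff = 1.00250   # hypothetical eigenvalue, all neutrons
k_p   = 0.99520   # hypothetical eigenvalue, prompt neutrons only

beta_eff = 1.0 - k_p / k_eff
print(f"beta_eff ~ {beta_eff:.5f}")   # ~ 0.00728 for these made-up values
```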
Wireless sensor networks (WSNs) play a prominent role in the world of computer networks. Deployment is a basic requirement of WSNs and a factor that strongly affects basic network services. In deployment, balancing conflicting optimisation factors, e.g. connectivity and coverage, is a challenging and sophisticated issue, making deployment an NP-complete problem. Most existing research has attempted to tackle this problem by applying classic single-objective metaheuristic algorithms in small-scale uniform 2D environments. In this study, a new hybrid multi-objective optimisation algorithm, constructed by combining a multi-objective bee algorithm with the Levy flight (LF) random walk, is proposed to deal with the deployment problem in WSNs. For this purpose, two of the most important criteria, connectivity and coverage, have been considered as objectives. A series of experiments is carried out in large-scale non-uniform 3D environments, whereas most existing methods are applicable only to small-scale uniform 2D environments. This study fully takes into account the stochastic behaviour of swarms, which previous studies have not considered. The evaluation results show that the multi-objective LF bee algorithm, in most cases, surpasses the NSGA-II, IBEA and SPEA2 algorithms.
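The Levy flight component can be made concrete with Mantegna's algorithm, a standard way of generating heavy-tailed steps for swarm optimisers. The sketch below shows the LF step only, under the assumption of a generic 3D position update; it is not the paper's exact deployment operator.

```python
# A sketch of Levy-flight step generation via Mantegna's algorithm, a common
# way to inject heavy-tailed moves into swarm optimisers. This illustrates the
# LF component only; the step scale and positions are illustrative.
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """Draw one Levy-distributed step of dimension `dim` (Mantegna, 1994)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
              ) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

# Example: perturb a candidate sensor position in a 3D field.
position = np.array([50.0, 20.0, 5.0])
position += 0.1 * levy_step(3)
print(position)
```

Occasional very long jumps produced by the heavy-tailed distribution are what help the swarm escape local optima in large, non-uniform environments.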
A software product line comprises software engineering methods, tools and techniques for creating a group of related software systems from a shared set of software assets. Each product is a combination of multiple features, and these features are known as software assets. The task of product derivation can therefore be mapped to a feature subset selection problem, which is an NP-hard combinatorial optimization problem. This issue becomes especially significant when the number of features in a software product line is huge. In this paper, a new method based on a Multi-Objective Bee Swarm Optimization algorithm (called MOBAFS) is presented. MOBAFS is a population-based optimization algorithm inspired by the foraging behavior of honey bees, and it is used here to solve this search-based software engineering (SBSE) problem. The technique is evaluated on five large-scale real-world software product lines ranging from 1,244 to 6,888 features. The proposed method is compared with the state-of-the-art algorithm SATIBEA. According to three solution-quality indicators and two diversity metrics, the proposed method, in most cases, surpasses the other algorithm.
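To make the problem encoding concrete: in this family of SBSE approaches a candidate product is typically a bit vector over features, scored against the feature model's constraints (often given as CNF clauses). The sketch below shows such an encoding and evaluation; the clauses, objectives, and scoring are illustrative assumptions, not the paper's exact formulation.

```python
# A sketch of the solution encoding a bee swarm optimiser could work on: a
# product is a bit vector over features, scored against feature-model
# constraints given as CNF clauses (positive literal = feature must be
# selected, negative = must be deselected). Clause data is illustrative.
import random

NUM_FEATURES = 8
# Hypothetical CNF clauses over 1-based feature indices, DIMACS-style signs.
clauses = [[1], [-2, 3], [4, 5], [-6, -7]]

def violated(bits, clauses):
    """Count clauses not satisfied by the candidate product `bits`."""
    def sat(clause):
        return any((bits[abs(l) - 1] == 1) if l > 0 else (bits[abs(l) - 1] == 0)
                   for l in clause)
    return sum(not sat(c) for c in clauses)

def objectives(bits):
    """Two of the usual SBSE objectives: correctness and product size."""
    return violated(bits, clauses), sum(bits)

candidate = [random.randint(0, 1) for _ in range(NUM_FEATURES)]
print(candidate, "->", objectives(candidate))
```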
Appropriate selection of rain gauge stations is a classic problem in operational hydrology. The current literature on rain gauge network design resorts to various simplifications to bypass the curse of dimensionality. This paper presents a new methodology for optimum rain gauge network design with no simplification involved. To the best of the authors' knowledge, this is the first time geostatistical tools have been coupled with the artificial bee colony (ABC) algorithm to prioritize rain gauge stations. To evaluate the effectiveness of the proposed methodology, the coupled algorithm is applied to a case study with 34 existing rain gauge stations in the south-western part of Iran. The developed methodology is robust and efficient and fills a gap in existing methodologies. It has few control parameters, which accelerates convergence remarkably. The results show that the proposed approach compares well with existing paradigms in rain gauge network design. In particular, the measure of network accuracy matches the time-consuming benchmark paradigm for small and large numbers of holding stations, while filling the gap for intermediate values where a benchmark solution is not available. In conclusion, the proposed scheme can serve as a yardstick for evaluating the effectiveness of existing paradigms in network design for intermediate numbers of holding stations.
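One plausible form of the geostatistical building block in such a coupling is the ordinary-kriging estimation variance, which the ABC search would then minimise over candidate station subsets. The sketch below computes that variance from a spherical variogram; the coordinates and variogram parameters are made-up, and the paper's actual model and objective may differ.

```python
# A sketch of a geostatistical objective for station prioritisation: the
# ordinary-kriging variance at an unsampled point, given gauge locations and
# a spherical variogram. All numbers below are illustrative placeholders.
import numpy as np

def spherical(h, sill=1.0, rng=30.0, nugget=0.1):
    """Spherical variogram model gamma(h)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0, 0.0, g))

def kriging_variance(stations, target):
    """Ordinary-kriging variance at `target` given gauge `stations` (n x 2)."""
    n = len(stations)
    d = np.linalg.norm(stations[:, None] - stations[None, :], axis=-1)
    A = np.ones((n + 1, n + 1)); A[:n, :n] = spherical(d); A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = spherical(np.linalg.norm(stations - target, axis=1))
    w = np.linalg.solve(A, b)   # kriging weights plus Lagrange multiplier
    return float(w @ b)         # sigma^2 = sum(lambda * gamma0) + mu

stations = np.array([[0.0, 0.0], [20.0, 5.0], [10.0, 25.0]])
print(kriging_variance(stations, np.array([12.0, 12.0])))
```

Averaging this variance over a grid of target points gives a single accuracy score for any subset of holding stations, which is the kind of fitness an ABC swarm can optimise directly.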
In this work, a Multi-Level Artificial Bee Colony algorithm (called MLABC) is presented. MLABC uses two species. The first species employs n colonies, each of which optimizes the complete solution vector. The cooperation between these colonies is carried out by exchanging information through a leader colony, which contains a set of elite bees. The second species uses a cooperative approach in which the complete solution vector is divided into k sub-vectors, each optimized by a separate colony. The cooperation between these colonies is carried out by compiling the sub-vectors into the complete solution vector. Finally, cooperation between the two species is achieved by exchanging information between them. The proposed algorithm is tested on a set of well-known test functions. The results show that MLABC solves numerical functions efficiently and robustly.
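The second species' decomposition can be illustrated as follows. This is a structural sketch only: the per-colony search is reduced to a toy greedy perturbation standing in for the full ABC employed/onlooker/scout phases, and the benchmark, dimensions, and split are assumptions.

```python
# A sketch of cooperative sub-vector optimisation: the D-dimensional solution
# is split into k sub-vectors, each "colony" improves its slice within the
# shared context vector, and the slices are compiled back together. The toy
# random perturbation below stands in for the full ABC update rules.
import numpy as np

rng = np.random.default_rng(0)
D, K, ITERS = 12, 3, 200
sphere = lambda x: float(np.sum(x ** 2))     # well-known benchmark function

context = rng.uniform(-5, 5, D)              # shared complete solution vector
slices = np.array_split(np.arange(D), K)     # k sub-vectors, one per colony

for _ in range(ITERS):
    for idx in slices:                       # each colony optimises its slice
        trial = context.copy()
        trial[idx] += rng.normal(0, 0.5, idx.size)
        if sphere(trial) < sphere(context):  # greedy selection, ABC-style
            context = trial                  # compile sub-vector back in

print(f"best fitness after cooperation: {sphere(context):.4f}")
```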
Software-Defined Networks (SDNs) split the data plane from the control plane. They can be used in wireless networks and bring flexibility, less interference, simpler management, lower energy consumption and load balancing. They can also improve service quality, handover and mobility between different service providers. In previous methods, when link states changed, the controller deleted the stored topology and the tables in the switches and rebuilt them. The controller therefore had to rebuild the topology, execute the routing algorithms again and reinstall the routes in the switch tables. This may not cause problems in wired networks, where link failures are rare, but in wireless networks they happen frequently. In this paper the problems of (1) dynamic topology, (2) removing the whole flow table due to topology changes, and (3) time-consuming routing algorithms are addressed. In the proposed method, after building a virtual topology with virtual node coordinates, the controller executes a heuristic routing algorithm on this topology. When the topology changes, rules are removed from the tables only for routes that include deleted or modified links, and only the rules related to those missing routes are deleted. Finally, the proposed method is compared with the L2_multi method (the method used in the POX controller). Results show that the proposed method decreases the average delay.
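The selective rule-removal idea can be sketched in a few lines: keep, for each installed flow, the list of links its route traverses, and on a link change invalidate only the flows whose routes contain that link. The route and rule structures below are simplified stand-ins, not the POX data model.

```python
# A sketch of selective flow-rule removal: instead of flushing every flow
# table on a topology change, delete only the rules of routes that traverse
# the failed or modified link. Flows and paths here are illustrative.
installed_routes = {
    ("h1", "h4"): [("s1", "s2"), ("s2", "s4")],
    ("h1", "h3"): [("s1", "s3")],
    ("h2", "h4"): [("s2", "s4")],
}

def on_link_change(failed_link, routes):
    """Return flows whose rules must be removed; all other rules survive."""
    failed = {failed_link, failed_link[::-1]}        # links are bidirectional
    return [flow for flow, path in routes.items()
            if any(link in failed for link in path)]

stale = on_link_change(("s2", "s4"), installed_routes)
for flow in stale:
    del installed_routes[flow]                       # remove only stale rules
print("re-routed flows:", stale)
print("untouched flows:", list(installed_routes))
```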
Applications of Artificial Intelligence (AI) methods, especially machine learning techniques, have increased in recent years. Classification algorithms have been successfully applied to various problems such as requirement classification. Although these algorithms perform well, most of them cannot explain how they reach a decision. Explainable Artificial Intelligence (XAI) is a set of new techniques that explain the predictions of machine learning algorithms. In this work, the applicability of XAI to software requirement classification is studied, and an explainable software requirement classifier is presented using the LIME algorithm. The explainability of the proposed method is studied by applying it to the PROMISE software requirement dataset. The results show that XAI can help analysts and requirement specifiers to better understand why a specific requirement is classified as functional or non-functional, and the important keywords behind such decisions are identified and analyzed in detail. The effect of XAI on feature reduction is also analyzed, and the results show that the XAI model plays a positive role in feature analysis.
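A minimal sketch of LIME-based explanation for a requirement classifier is shown below. The training sentences are made-up stand-ins for PROMISE entries, and the TF-IDF plus logistic regression pipeline is a generic assumption, not necessarily the classifier used in the paper.

```python
# A minimal sketch: explain a requirement classifier's prediction with LIME.
# Training data and the pipeline are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

texts = [
    "The system shall allow users to create a new account",
    "The system shall send a confirmation email after registration",
    "The application must respond to queries within two seconds",
    "The system shall be available 99.9 percent of the time",
]
labels = [0, 0, 1, 1]                     # 0 = functional, 1 = non-functional

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["functional", "non-functional"])
explanation = explainer.explain_instance(
    "The system shall respond within one second",
    pipeline.predict_proba, num_features=5)
print(explanation.as_list())              # keywords with signed weights
```

The signed per-word weights returned by `as_list()` are exactly the kind of evidence that lets an analyst see which keywords pushed a requirement toward the functional or non-functional class.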
Introduction: One of the main causes of medical errors is drug interaction, which occurs when one drug decreases or increases the effect of another. Drug interactions occur as a result of changes in pharmacodynamics, pharmacokinetics, or a combination of both. Given the problems caused by these errors, the lack of an efficient system for the automatic detection of drug interactions, and the fact that a large proportion of these interactions can be prevented, we aimed to detect drug interactions in medical texts, classify them, and identify the best algorithm. Methods: A two-stage classification was used to address the unbalanced distribution of data across the drug interaction classes. A subset of the most suitable features was identified for classification. In the first stage, a binary classifier separates drug pairs that interact from those that do not. Then, the interacting drug pairs were classified into one of the following four classes: effect, advice, mechanism, and int. In this study, different algorithms were used in both classification stages, based on the type of data and expert opinion. To validate the first-stage model, we used 90% of the data for training and the rest for testing. To validate the second-stage model, we used the difference verification method. The Weka data analysis software was used to design the models and perform the classification. Results: The results showed that the most appropriate features were mutual information (obtaining a score of 1000) and parts of speech. The J48 algorithm achieved the highest efficiency among the tested algorithms in separating drug pairs with and without interaction (F-measure = 0.914), and the bagging algorithm achieved the highest efficiency in the multiclass stage (F-measure = 0.915). The ZeroR algorithm required the shortest time to build the model (less than half a second) in both stages. Conclusion: According to the results of the J48 and random forest algorithms, it can be concluded that decision trees are the most appropriate approach for the extraction and automatic classification of drug interactions, using features derived from text, for application in clinical decision support systems.
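The two-stage scheme can be sketched with scikit-learn stand-ins for the Weka learners: a decision tree (analogous to J48) for the binary stage and a bagging ensemble for the multiclass stage. The features and labels below are synthetic placeholders, not the paper's text-derived features.

```python
# A sketch of the two-stage classification with scikit-learn stand-ins for
# the Weka learners. Stage 1 separates interacting from non-interacting drug
# pairs; stage 2 assigns one of the four interaction classes. Data and the
# simple feature layout are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 10))            # placeholder text-derived features
interacts = (X[:, 0] + X[:, 1] > 0).astype(int)   # stage-1 binary labels
itype = rng.integers(0, 4, size=400)      # effect/advice/mechanism/int

stage1 = DecisionTreeClassifier(max_depth=5).fit(X, interacts)
mask = interacts == 1                     # train stage 2 on interacting pairs
stage2 = BaggingClassifier(n_estimators=30).fit(X[mask], itype[mask])

def classify(x):
    """Stage 1: interaction yes/no. Stage 2: type, only if stage 1 fires."""
    if stage1.predict(x.reshape(1, -1))[0] == 0:
        return "no interaction"
    return ["effect", "advice", "mechanism", "int"][
        int(stage2.predict(x.reshape(1, -1))[0])]

print(classify(X[0]))
```

Training the second-stage model only on pairs that actually interact is what lets this design sidestep the class-imbalance problem the abstract describes.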