In the cloud environment, the transfer of data from one cloud server to another is called migration. Data can be transferred from one data centre to another in various ways. This research aims to improve the migration performance of virtual machines (VMs) in the cloud environment. VMs allow cloud customers to store essential data and resources. However, server usage has grown dramatically due to the virtualization of computer systems, resulting in higher data centre power consumption, storage needs, and operating expenses. Multiple VMs on one data centre share resources such as central processing unit (CPU) cache, network bandwidth, memory, and application bandwidth. In multi-cloud settings, VM migration addresses the performance degradation caused by cloud server configuration, unbalanced traffic load, resource load management, and fault situations during data transfer. VM migration speed is influenced by the size of the VM, the dirty page rate of the running application, and the latency of migration iterations. As a result, evaluating VM migration performance while considering all of these factors becomes a difficult task. The main effort of this research is to assess the impact of these factors on migration performance. The simulation results in MATLAB show that as the VM size grows, the migration time and downtime of VMs can be impacted by up to three orders of magnitude. Migration time and downtime grow with the dirty page rate, while latency decreases as network bandwidth increases during migration; post-migration overhead is calculated once the VM transfer is completed. All simulated VM migration cases were evaluated in a fuzzy inference system, and performance graphs are presented.
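To make the relationships between VM size, dirty page rate, bandwidth, migration time, and downtime concrete, the following is a minimal sketch of the standard iterative pre-copy live-migration model. It assumes a constant dirty rate and constant bandwidth; the function name, thresholds, and example values are illustrative and not taken from the paper.

```python
# Minimal sketch of the standard iterative pre-copy live-migration model.
# Assumes a constant page-dirty rate and network bandwidth; all names
# and thresholds here are illustrative, not taken from the paper.

def precopy_migration(vm_size_mb, dirty_rate_mbps, bandwidth_mbps,
                      stop_threshold_mb=50.0, max_rounds=30):
    """Return (total_migration_time_s, downtime_s, rounds)."""
    to_send = vm_size_mb          # round 0 copies the whole VM memory
    total_time = 0.0
    for rounds in range(1, max_rounds + 1):
        t = to_send / bandwidth_mbps      # time to push this round's pages
        total_time += t
        to_send = dirty_rate_mbps * t     # pages dirtied in the meantime
        if to_send <= stop_threshold_mb:  # remainder small enough: stop-and-copy
            break
    downtime = to_send / bandwidth_mbps   # VM is paused for the final copy
    return total_time + downtime, downtime, rounds

# Example: larger VMs and higher dirty rates inflate both metrics.
print(precopy_migration(vm_size_mb=4096, dirty_rate_mbps=100, bandwidth_mbps=1000))
```

Under this model, migration time grows with VM size and dirty rate, while higher bandwidth shortens every round, matching the trends reported above.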
Skin cancer is a major type of cancer with a rapidly increasing number of victims all over the world. It is very important to detect skin cancer in its early stages. Computer-aided diagnosis systems help physicians diagnose the disease, which allows appropriate treatment and increases patients' survival rates. The proposed system tackles the classification problem for skin disease: an automated and reliable system for the classification of malignant and benign tumors is developed. In this system, a customized pretrained Deep Convolutional Neural Network (DCNN) is implemented. The pretrained AlexNet model is customized by replacing the last layers according to the problem at hand, and the softmax layer is modified for binary classification. The proposed model is trained on a skin cancer dataset of 1920 images of malignant and benign tumors, with 960 images per class. After training, the model is validated on 480 images, with 240 images per class. The proposed model is analyzed using the following parameters: accuracy, sensitivity, specificity, Positive Predictive Value (PPV), Negative Predictive Value (NPV), False Positive Ratio (FPR), False Negative Ratio (FNR), Likelihood Ratio Positive (LRP), and Likelihood Ratio Negative (LRN). The accuracy achieved by the proposed model is 87.1%, which is higher than that of traditional classification methods.
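The following is a minimal PyTorch sketch of the kind of customization described above: loading a pretrained AlexNet and replacing its final layer for binary (malignant vs. benign) classification. The training setup (frozen feature extractor, learning rate) is an illustrative assumption, not the paper's exact configuration.

```python
# Minimal PyTorch sketch: pretrained AlexNet with its 1000-class head
# replaced by a 2-class head for malignant-vs-benign classification.
# Freezing the backbone and the learning rate are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 2)   # replace final layer: 2 classes

# Freeze the pretrained feature extractor; train only the new layers.
for p in model.features.parameters():
    p.requires_grad = False

criterion = nn.CrossEntropyLoss()          # softmax + NLL over the 2 classes
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3)
```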
Alzheimer's disease is a severe neurodegenerative disease that damages brain cells, leading to permanent loss of memory, also called dementia. Many people die of this disease every year; it is not curable, but early detection can help restrain its progression. Alzheimer's is most common in elderly people aged 65 and above. An automated system is required for early detection that can detect the disease and classify it into multiple Alzheimer's classes. Deep learning and machine learning techniques are used to solve many medical problems like this one. The proposed Alzheimer's disease detection system applies transfer learning to multi-class classification of brain magnetic resonance imaging (MRI) scans, classifying images into four stages: mild demented (MD), moderate demented (MOD), non-demented (ND), and very mild demented (VMD). Simulation results show that the proposed model achieves 91.70% accuracy. It is also observed that the proposed system gives more accurate results than previous approaches.
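As an illustration of transfer learning for the four-stage classification described above, the sketch below adapts a pretrained network to the four classes. The ResNet-18 backbone and the preprocessing pipeline are assumptions for illustration; the abstract does not name a specific pretrained model.

```python
# Illustrative PyTorch sketch of transfer learning for four-stage
# Alzheimer's classification from brain MRI. The ResNet-18 backbone and
# preprocessing are assumptions, not the paper's stated configuration.
import torch.nn as nn
from torchvision import models, transforms

CLASSES = ["MildDemented", "ModerateDemented", "NonDemented", "VeryMildDemented"]

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))  # 4-way head

# MRI slices are grayscale; replicate to 3 channels for an ImageNet backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```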
The need for sustainable development, coupled with the growth in industrialization, creates a complex environment in which businesses strive to achieve and maintain a competitive advantage. Information now forms a vital part of how firms perform in today’s globalized corporate world. This paper explores the impact of information systems (IS) on sustainable organizational operations. Furthermore, it observes how IT infrastructure (ITI) and information security policy (ISP) play vital roles in the changing business environment. The importance of information security culture (ISC) as a mediator in developing the association between the independent and dependent variables is also investigated. Reviewing how these categories interact within the context of transitional economies is the main goal. To assess and predict the impact of IS, ISP, ITI, and ISC on sustainable organizational performance (SOP), 214 businesses took part in a structured survey. SPSS software was used for data cleaning and reliability analysis; the Preacher and Hayes approach was applied for mediation analysis; and Python was used for multiple linear regression analysis. The study is significant for developing countries regarding the role of IS in the effectiveness of IT governance and strategic integration. The findings indicate that organizational performance is substantially impacted by ISP, ITI, and ISC.
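A hedged sketch of the Python regression step mentioned above follows: multiple linear regression of SOP on ISP, ITI, and ISC over the survey responses. The CSV file name and column names are hypothetical stand-ins for the study's construct scores.

```python
# Hedged sketch of the multiple linear regression step: regress
# sustainable organizational performance (SOP) on ISP, ITI, and ISC.
# The file and column names are hypothetical, not from the study.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")        # 214 firms, construct scores
X = sm.add_constant(df[["ISP", "ITI", "ISC"]])  # predictors + intercept
model = sm.OLS(df["SOP"], X).fit()
print(model.summary())                          # coefficients, R^2, p-values
```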
Mobile devices and apps have become an essential part of our daily lives. Multi-touch gesture interaction directly on the touch screen is one of the most common ways to interact with mobile devices. However, in special circumstances (e.g., disabilities, wet hands, or wearing heavy gloves outside in cold weather) it is difficult to interact directly with the touch screen. In this work, we focus on utilizing the 3D accelerometer sensor, available in most current mobile devices, to provide an alternative to the standard set of multi-touch gestures. We defined these 3D accelerometer-based gestures based on a user study and built an open-source library, called 3DA-Gest, that provides this functionality to mobile application developers. Further, we built a proof-of-concept map-based mobile app to demonstrate our library in practice. A preliminary user study shows that users prefer to use our accelerometer-based gestures in special circumstances.
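To illustrate the underlying idea of accelerometer-based gestures, the sketch below recognizes a simple left/right tilt from raw 3-axis samples. This is a hypothetical example of the general technique, not the actual 3DA-Gest API.

```python
# Hypothetical sketch of recognizing a tilt gesture from 3D accelerometer
# samples. Thresholds and gesture names are illustrative; this is NOT
# the 3DA-Gest library API, only the underlying idea.
def classify_tilt(samples, threshold=3.0):
    """samples: list of (x, y, z) accelerations in m/s^2 over a time window."""
    mean_x = sum(s[0] for s in samples) / len(samples)
    if mean_x > threshold:
        return "tilt_right"
    if mean_x < -threshold:
        return "tilt_left"
    return "no_gesture"

# A window of samples leaning toward +x is reported as a right tilt.
print(classify_tilt([(4.1, 0.2, 9.4), (3.8, 0.1, 9.5), (4.5, 0.3, 9.2)]))
```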
Data access latency for databases can be reduced by using caching. Semantic caching enhances the performance of normal caching by locally answering both fully and partially overlapping queries. Efficient query processing and cache management are major challenges for semantic caching, which demands efficient, correct, and complete algorithms to process incoming queries. Query processing over semantic caches has been extensively studied. In this paper, we present a survey of existing techniques for query processing over semantic caches. We define criteria to evaluate and analyze the query processing techniques, and on the basis of these criteria, an analysis matrix is provided for quick insight into the prominent features and limitations of each technique. We conclude that the evaluated techniques are not capable of zero-level query rejection and have high runtime complexity for handling queries; these deficiencies are the primary factors degrading the efficiency of query processing. A performance comparison of two techniques is also presented.
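The partial-overlap answering mentioned above rests on splitting an incoming query into a probe part (answered locally from the cache) and a remainder part (sent to the server). The sketch below shows this split for single-attribute range predicates; the interval representation is a simplification for illustration.

```python
# Minimal sketch of the probe/remainder split at the heart of semantic
# caching. Predicates are simplified to single-attribute ranges (lo, hi);
# real techniques handle general conjunctive predicates.
def split_query(query, cached):
    q_lo, q_hi = query
    c_lo, c_hi = cached
    lo, hi = max(q_lo, c_lo), min(q_hi, c_hi)
    if lo >= hi:                       # no overlap: whole query goes remote
        return None, [query]
    probe = (lo, hi)                   # answered from the local cache
    remainder = []                     # parts still needing the server
    if q_lo < lo:
        remainder.append((q_lo, lo))
    if hi < q_hi:
        remainder.append((hi, q_hi))
    return probe, remainder

# Query age in [20, 60) against a cache holding [30, 50):
print(split_query((20, 60), (30, 50)))  # probe (30, 50); remainder [(20,30),(50,60)]
```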
Document classification is an important task in data mining. Currently, identifying the category (i.e., topic) of a scientific publication is a manual task. The Association for Computing Machinery Computing Classification System (ACM CCS) is the most widely used multi-level taxonomy for scientific document classification. Correct classification becomes more difficult as the number of levels and the number of categories increase. Domain overlap aggravates this problem, as a publication may belong to multiple domains; thus, manual classification against the taxonomy becomes even harder. Most existing text classification schemes are based on the Term Frequency-Inverse Document Frequency (TF-IDF) technique, and such approaches become computationally inefficient for large datasets. Moreover, most text classification techniques are not experimentally validated on scientific publication datasets, and multi-level, multi-class classification is missing from most existing document classification schemes. The proposed approach is based on metadata (i.e., structural representation), in which only the title and keywords are considered. We reduced the feature set by dropping other metadata, such as the abstract section of the publication, which diversifies and degrades result accuracy. The proposed solution was inspired by the well-known evolutionary Particle Swarm Optimization (PSO) technique and achieves an overall accuracy of 84.71% on the Journal of Universal Computer Science (J.UCS) dataset.
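For readers unfamiliar with the metaheuristic that inspired the proposed solution, the following is a hedged sketch of canonical PSO minimizing an objective over a weight vector. The objective shown is a stand-in for illustration; it is not the paper's actual fitness function, and the hyperparameters are conventional defaults.

```python
# Hedged sketch of canonical Particle Swarm Optimization (PSO). The
# objective below is a stand-in; the paper's fitness function and
# parameter settings are not reproduced here.
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=objective)[:]        # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)[:]
    return gbest

# Stand-in objective: squared distance from an arbitrary target vector.
print(pso(lambda v: sum((x - 0.5) ** 2 for x in v), dim=3))
```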