Background: The fog computing paradigm has recently emerged and gained considerable attention in the present era of the Internet of Things. The rapid growth in the number of connected devices leads to packet flows everywhere on the Internet. To handle this situation and provide computation at the network edge, fog computing is the need of the present time: it enhances traffic management and helps avoid critical situations such as jams and congestion. Methods: For research purposes, fog computing scenarios can be implemented in several ways, i.e., real-time implementation, implementation using emulators, and implementation using simulators. The present study describes the various simulation and emulation tools for implementing fog computing scenarios. Results: The review shows that iFogSim is the simulator most researchers use in their work. Among emulators, EmuFog is adopted at a higher pace than the other available emulators. This may be due to the ease of implementation and user-friendly nature of these tools and the languages they are built upon. The use of such tools enables a better research experience and leads to improved quality-of-service parameters (such as bandwidth, network utilization, and security). Conclusion: There are many fog computing simulators/emulators, based on different platforms and using different programming languages. The paper concludes that the two main simulation and emulation tools in the area of fog computing are iFogSim and EmuFog. The accessibility and ease of use of these simulation/emulation tools enable a better research experience and lead to improved quality-of-service parameters.
Cancer is a frequent illness that cannot be overlooked and often ends in death; early diagnosis helps direct the most effective care to save human lives. Cancer can develop anywhere in the human body, but hematologic (blood) cancers and solid-tumor cancers are the two main types. A solid tumor is a type of nodule, but not all nodules are cancerous. In some instances, a cancer diagnosis rests on the doctor's intuition, which can lead to certain patients being overlooked and developing complications. During the past few years, machine learning has proved to be a popular and powerful method in many areas of medical diagnosis, outperforming classical methods. This chapter mainly focuses on two types of nodules: malignant (nodules that are cancer) and benign (nodules that are not cancer). Five supervised ML algorithms (Logistic Regression, SVM, Naïve Bayes, Decision Tree, and KNN) are applied to two different datasets (breast and lung cancer). The main features of the algorithms for cancer dataset classification are discussed, and their performance is analyzed based on accuracy. The technique consists of five stages: pre-processing of the dataset, feature selection, training, testing the model, and performance analysis.
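As a concrete illustration of this five-stage pipeline, the following is a minimal Python sketch using scikit-learn that compares the five classifiers on scikit-learn's built-in breast cancer dataset. The scaling step, the SelectKBest selector with k=10, and the 70/30 split are illustrative assumptions, not necessarily the chapter's exact setup.

```python
# Minimal sketch: five supervised classifiers on a benign/malignant task.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)  # benign vs. malignant labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

models = {
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
}

for name, clf in models.items():
    # Stages 1-3: pre-processing (scaling), feature selection, training.
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=10), clf)
    pipe.fit(X_train, y_train)
    # Stages 4-5: testing the model and performance analysis (accuracy).
    acc = accuracy_score(y_test, pipe.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```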
Networks of resource-constrained devices, also known as Low-power and Lossy Networks (LLNs), constitute the edge tier of Internet of Things (IoT) applications such as smart homes, smart cities, and connected vehicles. The IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) ensures efficient routing in the edge tier of the IoT environment. However, RPL has inherent vulnerabilities that allow malicious insider entities to instigate several security attacks in the IoT network. As a result, IoT networks suffer from resource depletion, performance degradation, and traffic disruption. Recent literature discusses several machine learning algorithms for detecting one or more routing attacks. However, IoT infrastructures are expanding, and so are the attack surfaces; it is therefore essential to have a solution that can adapt to this change. This paper introduces a comprehensive framework to detect routing attacks within LLNs. The proposed solution leverages deep learning by combining a Restricted Boltzmann Machine (RBM) and Long Short-Term Memory (LSTM). The framework is trained on 11 network parameters to understand and predict normal network behavior. Anomalies, identified as deviations from the forecast trends, serve as indicators of potential routing attacks and thus address vulnerabilities in RPL.
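To make the forecast-then-flag idea concrete, here is a minimal Python/TensorFlow sketch: an LSTM learns to predict the next reading of 11 network parameters from a sliding window, and readings whose prediction error exceeds a threshold are flagged as anomalies. The synthetic telemetry, window length, and 3-sigma threshold are illustrative assumptions, and the paper's RBM stage is omitted for brevity.

```python
# Minimal sketch: LSTM forecasting of 11 network parameters; large
# deviations from the forecast are treated as suspected routing attacks.
import numpy as np
import tensorflow as tf

N_PARAMS, WINDOW = 11, 20

# Synthetic stand-in for benign LLN telemetry (real data would come
# from RPL network traces).
rng = np.random.default_rng(0)
t = np.arange(2000)[:, None]
series = (np.sin(t / 50.0 + np.arange(N_PARAMS))
          + 0.05 * rng.standard_normal((2000, N_PARAMS)))

# Build (sliding window -> next step) supervised pairs.
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_PARAMS)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(N_PARAMS),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Flag anomalies: prediction error well above what benign traffic shows.
errors = np.mean((model.predict(X, verbose=0) - y) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()
print("suspected attack windows:", np.where(errors > threshold)[0])
```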
Background: Generally, it is observed that no single algorithm classifies tasks using the Quality of Service (QoS) parameters they request; instead, existing work focuses on classifying resources and balancing tasks according to resource availability. In past literature, authors have divided load balancing solutions into three main parts: workload estimation, decision making, and task transferring. Workload estimation deals with identifying the requirements of incoming tasks on the system. Decision making analyzes whether or not load balancing should be performed for a given node. If the decision to balance has been made, the third step transfers the task to an appropriate node, reaching a saturation point at which the system is in a stable state. Objective: To address this issue, our approach focuses on workload estimation, and its main objective is to cluster incoming heterogeneous tasks into generic groups. A further issue is that client demand varies across tasks: some attributes may be much more critical to one user than to others, and this demand changes from user to user. Methods: This paper classifies tasks using QoS parameters and focuses on workload estimation, with the main objective of clustering incoming heterogeneous tasks into generic groups. For this, a K-Medoid-based clustering approach for cloud computing is devised and implemented. The approach is then compared across its iterations to analyze workload execution more deeply. Results: The analysis of our approach is carried out using the CloudSim simulator. The results show that the data are very uneven initially, with some clusters having only four elements while others have many more, whereas after the 20th iteration the data are more evenly balanced; the clusters formed after the 20th iteration are therefore more stable than those formed in the 1st iteration. The number of iterations is also minimized to avoid unnecessary clustering, since the changes in medoids become very small after a few steps. Conclusion: A brief survey of various load balancing techniques in cloud computing is presented. These approaches are meta-heuristic in nature, exhibit complex behavior, and can be implemented in cloud computing. In our paper, a K-Medoid-based clustering approach for grouping tasks into similar groups has also been implemented. The implementation uses the CloudSim simulation package, a Java-based open-source package provided by the CLOUDS Lab. The results obtained are limited to the classification of tasks into various clusters; the approach would also be useful when a new task arrives and can simply be assigned to a VM that was created for another element of its class. In future, this work can be expanded to create an effective clustering-based model for load balancing.
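For illustration, the following is a minimal PAM-style K-Medoid sketch in Python/NumPy that groups synthetic tasks by three hypothetical QoS parameters (CPU demand, bandwidth request, deadline tightness). The paper's actual experiments run inside CloudSim, so this sketches only the clustering step itself.

```python
# Minimal sketch: K-Medoid (PAM-style) clustering of tasks by QoS features.
import numpy as np

def k_medoids(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)  # nearest-medoid assignment
        new_medoids = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue  # keep the old medoid for an empty cluster
            # Swap in the member with the lowest total distance to the rest.
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):  # medoids stopped moving
            break
        medoids = new_medoids
    return medoids, np.argmin(dist[:, medoids], axis=1)

# Synthetic heterogeneous tasks: (CPU demand, bandwidth, deadline).
rng = np.random.default_rng(1)
tasks = np.vstack([rng.normal(center, 0.5, size=(50, 3))
                   for center in ([1, 1, 1], [4, 2, 3], [2, 5, 4])])
medoids, labels = k_medoids(tasks, k=3)
print("medoid task indices:", medoids, "| cluster sizes:", np.bincount(labels))
```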
In the last decades, the IoT domain has been extensively explored by the research community due to its vast real-time applications. The combination of deep learning and IoT is well accepted worldwide since, using deep learning, IoT devices can easily be turned into intelligent devices. Moreover, these devices are capable of making decisions based on real-time data. However, deploying deep learning models on IoT devices is not easy, as these devices are constrained by limited computational power and storage space. Deep learning architectures are generally large in terms of storage, and due to model complexity they require substantial resources to generate output. To overcome the storage space and resource barriers, we propose a method based on the particle swarm optimization (PSO) technique for compressing the UNet architecture for easy deployment on IoT devices for semantic segmentation. In this paper, all the intermediate steps involved in this compression of UNet using PSO are explained with suitable examples. Experimentally, it is shown that the proposed algorithm compresses the UNet architecture on the chest radiograph dataset by 77%, with a 0.68% decrease in accuracy and a 2.23x improvement in inference time.
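A minimal sketch of the PSO step is given below: each particle encodes per-block channel-keep ratios for a UNet, and a fitness function trades compressed size against accuracy loss. The toy fitness, the 4-block encoding, and the PSO hyperparameters are illustrative assumptions; in the actual method the fitness would come from evaluating the pruned UNet on the segmentation task.

```python
# Minimal sketch: PSO searching for per-block channel-keep ratios.
import numpy as np

N_BLOCKS, N_PARTICLES, STEPS = 4, 20, 50
rng = np.random.default_rng(0)

def fitness(keep_ratios):
    # Toy proxy: size grows with kept channels; accuracy degrades as
    # ratios shrink. A real run would retrain/evaluate the pruned UNet.
    size = np.mean(keep_ratios ** 2)
    acc_loss = np.mean((1 - keep_ratios) ** 2)
    return size + 2.0 * acc_loss  # lower is better

pos = rng.uniform(0.1, 1.0, (N_PARTICLES, N_BLOCKS))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(STEPS):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.05, 1.0)  # keep ratios in a valid range
    f = np.array([fitness(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best per-block keep ratios:", np.round(gbest, 3))
```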
The Domain Name System (DNS) is a hierarchical naming system built on a distributed database for computers, services, or any other resource connected to the Internet or a private network. It translates domain names meaningful to humans into the numerical identifiers associated with networking equipment for the purpose of locating and addressing these devices worldwide [1]. The job of DNS is to convert the human-readable addresses entered in the browser's address bar into machine-readable IP addresses. DNS spoofing refers to the action of answering a DNS request that was intended for another server (a "real" DNS server). This arrangement can occur in a server-server exchange (a DNS server asks another DNS server to resolve a name on behalf of a client) or in a client-server exchange.
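As a small illustration of the name-to-address translation DNS performs, the following Python sketch resolves a placeholder hostname through the system resolver (which in turn queries DNS servers); a spoofing attacker succeeds by getting a forged answer accepted in place of the real server's response.

```python
# Minimal sketch: the DNS translation step, via the system resolver.
import socket

hostname = "example.com"  # illustrative placeholder
# getaddrinfo consults the local resolver, which queries DNS servers.
addresses = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
for ip in sorted(addresses):
    print(f"{hostname} -> {ip}")
```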