The Internet of Things (IoT) is a widespread technology that comprises several networking solutions for connecting things to the rest of the Internet. Understanding the characteristics of these solutions is fundamental to satisfying the diverse requirements of all possible applications. The goal of this paper is to empirically evaluate and compare the performance of the most widespread technologies for IoT low-power long-range communications. The parameters considered are energy efficiency, message loss, and message latency. The results show that up to 2% of messages can be lost when using LoRaWAN, although latency is always below 7 seconds. NB-IoT exhibits slightly larger latency values and delivers more messages than LoRaWAN. Messages sent using Sigfox are always correctly delivered, but the communication introduces delays of up to 100 seconds. These results, which show how different communication technologies provide significantly different performance, are of great value to all players in the IoT scenario interested in LPWAN technologies.
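By way of illustration, the following minimal sketch shows how loss rate and worst-case latency can be derived from paired send/receive logs of such an experiment; the message identifiers and timestamps are hypothetical, not the paper's data.

```python
# Illustrative sketch (not from the paper): computing message loss and
# latency from hypothetical send/receive timestamp logs of an LPWAN test.

sent = {  # message id -> send time (seconds); hypothetical data
    1: 0.0, 2: 10.0, 3: 20.0, 4: 30.0, 5: 40.0,
}
received = {  # message id -> receive time; message 4 was lost
    1: 1.8, 2: 11.2, 3: 26.5, 5: 41.1,
}

delivered = [mid for mid in sent if mid in received]
loss_rate = 1 - len(delivered) / len(sent)        # fraction of lost messages
latencies = sorted(received[m] - sent[m] for m in delivered)

print(f"loss rate: {loss_rate:.1%}")              # -> 20.0%
print(f"max latency: {latencies[-1]:.1f} s")      # -> 6.5 s
```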
Localization within a Wireless Sensor Network consists of determining the position of a given set of sensors while satisfying non-functional requirements such as (1) efficient energy consumption, (2) low communication or computation overhead, (3) no, or limited, use of special hardware components, (4) fast localization, (5) robustness, and (6) low localization error. Although several algorithms and techniques are available in the literature, localization remains an open issue because none of the current solutions is able to jointly satisfy all of the above requirements. An algorithm called ROCRSSI appears to be a suitable solution; however, it is affected by several inefficiencies that limit its effectiveness in real-world scenarios. This paper proposes a refined version of the algorithm, called ROCRSSI++, which resolves these inefficiencies by using and storing the information gathered by the sensors in a more efficient manner. Several experiments on actual devices have been performed; the results show a reduction of the localization error with respect to the original algorithm. The paper also investigates the energy consumption and the localization time required by the proposed approach.
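For context, ring-overlapping approaches in the spirit of ROCRSSI estimate a node's position as the region covered by the largest number of RSSI-derived rings. The sketch below is illustrative only: the anchors, ring radii, and grid bounds are hypothetical, and the actual RSSI comparison logic of ROCRSSI++ is not reproduced.

```python
# Minimal sketch of ring-overlapping localization (hypothetical data).
# Each ring asserts: "the unknown node lies between r_in and r_out
# from anchor (cx, cy)"; rings come from RSSI comparisons in ROCRSSI.
import math

rings = [  # (cx, cy, r_in, r_out) -- hypothetical values in meters
    (0.0, 0.0, 2.0, 6.0),
    (10.0, 0.0, 4.0, 8.0),
    (5.0, 8.0, 3.0, 7.0),
]

step = 0.5  # grid resolution (meters)
grid = [i * step for i in range(int(10.0 / step) + 1)]

best, cells = -1, []
for x in grid:
    for y in grid:
        # Count how many rings contain the cell (x, y).
        score = sum(1 for cx, cy, ri, ro in rings
                    if ri <= math.hypot(x - cx, y - cy) <= ro)
        if score > best:
            best, cells = score, [(x, y)]
        elif score == best:
            cells.append((x, y))

# Position estimate: centroid of the cells with maximum ring overlap.
ex = sum(x for x, _ in cells) / len(cells)
ey = sum(y for _, y in cells) / len(cells)
print(f"estimated position: ({ex:.2f}, {ey:.2f}), overlap = {best}")
```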
The management of Grid systems commonly lacks the information needed to identify the failures that may hinder the timely completion of jobs and waste computing resources. Monitoring can certainly help, but novel approaches need to be conceived for such large, geographically distributed systems. We propose a Grid Architecture for scalable Monitoring and Enhanced dependable job ScHeduling (GAMESH), a completely distributed and highly efficient management infrastructure for the dissemination of monitoring data and the troubleshooting of job execution failures in large-scale, multi-domain Grid environments. Evaluated in a real deployment and compared to other Grid management systems, GAMESH (i) ensures measurements of both computing resources and task-scheduling conditions at geographically sparse sites, while inducing low overhead on the entire infrastructure, and (ii) enables failure-aware scheduling that improves overall system performance, even in the presence of failures, by coordinating local job schedulers across multiple domains.
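As a sketch of what failure-aware scheduling means in practice, the toy dispatcher below inflates each site's nominal runtime by its monitored failure probability; the site names, rates, and retry model are hypothetical and do not reproduce GAMESH's coordination protocol.

```python
# Illustrative failure-aware dispatch (hypothetical data): send the job to
# the site minimizing expected completion time under independent retries.

sites = {  # site -> (nominal runtime in hours, monitored failure probability)
    "siteA": (2.0, 0.30),
    "siteB": (2.5, 0.05),
    "siteC": (1.8, 0.45),
}

def expected_runtime(runtime: float, p_fail: float) -> float:
    # With probability p_fail the job fails and must be rerun; the expected
    # number of attempts under independent retries is 1 / (1 - p_fail).
    return runtime / (1.0 - p_fail)

best = min(sites, key=lambda s: expected_runtime(*sites[s]))
print("dispatch to:", best)
print({s: round(expected_runtime(*v), 2) for s, v in sites.items()})
```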
In a large Infrastructure-as-a-Service (IaaS) cloud, component failures are quite common. Such failures may lead to occasional system downtime and the eventual violation of Service Level Agreements (SLAs) on cloud service availability. Availability analysis of the underlying infrastructure helps the service provider design a system capable of meeting a defined SLA, as well as evaluate the capabilities of an existing one. This paper presents a scalable, stochastic model-driven approach to quantify the availability of a large-scale IaaS cloud, where failures are typically dealt with by migrating physical machines among three pools: hot (running), warm (turned on, but not ready), and cold (turned off). Since monolithic models do not scale for large systems, we use an approach based on interacting Markov chains to demonstrate the reduction in both the complexity of the analysis and the solution time. The three pools are modeled by interacting sub-models, whose mutual dependencies are resolved using fixed-point iteration, for which the existence of a solution is proved. The analytic-numeric solutions obtained from the proposed approach and from the monolithic model are compared, showing that the errors introduced by the interacting sub-models are insignificant and that the approach can handle very large IaaS clouds. A simulative solution is also considered for the proposed model, and the solution times of the methods are compared.
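To illustrate the fixed-point mechanism, the sketch below iterates between two simplified two-state sub-models until the availabilities they exchange converge. All rates and the sub-model structure are hypothetical: the paper's sub-models are richer Markov chains, and the cold pool is folded into a constant replenishment rate here for brevity.

```python
# Minimal sketch of fixed-point iteration among interacting sub-models
# (hypothetical rates and two-state sub-models, not the paper's models).

def availability(fail_rate: float, repair_rate: float) -> float:
    # Steady-state availability of a two-state up/down Markov chain.
    return repair_rate / (fail_rate + repair_rate)

lam_hot, lam_warm = 0.01, 0.002   # hypothetical failure rates (per hour)
mu_warm, mu_cold = 0.5, 0.1       # hypothetical migration rates (per hour)

a_hot, a_warm = 1.0, 1.0          # initial guesses
for it in range(1000):
    # Hot pool is repaired by migrating warm machines, so its effective
    # repair rate scales with the warm pool's availability.
    a_hot_new = availability(lam_hot, mu_warm * a_warm)
    # Warm pool loses machines to the hot pool (extra demand while the hot
    # pool is down) and is replenished from the cold pool.
    a_warm_new = availability(lam_warm + lam_hot * (1 - a_hot_new), mu_cold)
    delta = max(abs(a_hot_new - a_hot), abs(a_warm_new - a_warm))
    a_hot, a_warm = a_hot_new, a_warm_new
    if delta < 1e-10:             # fixed point reached
        break

print(f"converged in {it + 1} iterations: "
      f"A_hot={a_hot:.6f}, A_warm={a_warm:.6f}")
```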
The use of large-scale processing systems has exploded during the last decade, and they are now recognized as significant contributors to world energy consumption and, in turn, to environmental pollution. Processing systems are no longer evaluated only for their performance, but also for how much they consume to perform at a certain level. Such evaluations aim at quantifying energy efficiency, conceived as the relation between a performance metric and a power consumption metric, while disregarding the malfunctions that commonly occur. The study of a real 500-node batch system shows that 9% of its power consumption is ascribable to failures compromising the execution of jobs. Fault tolerance techniques, commonly adopted to reduce the frequency of failures, also have a cost in terms of energy consumption.
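A minimal sketch of this kind of accounting follows; the job records and the node power draw are made-up numbers, not the measurements of the 500-node system.

```python
# Illustrative sketch (hypothetical data): the share of a batch system's
# energy consumption ascribable to failed jobs.

jobs = [  # (node-hours consumed, completed successfully?)
    (120.0, True), (30.0, False), (200.0, True),
    (15.0, False), (340.0, True), (55.0, False),
]
power_per_node = 250.0  # hypothetical average node power draw (watts)

total = sum(h for h, _ in jobs) * power_per_node / 1000.0        # kWh
wasted = sum(h for h, ok in jobs if not ok) * power_per_node / 1000.0

print(f"total energy: {total:.1f} kWh")
print(f"wasted by failures: {wasted:.1f} kWh ({wasted / total:.1%})")
```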
This dissertation introduces the concept of consumability for processing systems, encompassing performance, consumption, and dependability aspects, with the aim of providing a unified measure of these three aspects. The consumability analysis is also described: it is performed by means of a hierarchical stochastic model that considers the three aspects simultaneously when evaluating the efficiency and effectiveness of a system. The analysis offers system owners and administrators a way to evaluate cost-benefit trade-offs during the design, development, testing, and operational phases.
The analysis is illustrated through two case studies based on a real batch processing system. The studies provide a set of guidelines for the consumability analysis of other systems and empirically confirm the importance of considering dependability jointly with performance and consumption to make processing systems truly energy efficient.
Critical Infrastructures (CIs), such as smart power grids, transport systems, and financial infrastructures, are increasingly vulnerable to cyber threats, due to the adoption of commodity computing facilities. Despite the use of several monitoring tools, recent attacks have proven that current defensive mechanisms for CIs are not effective enough against most advanced threats. In this paper we explore the idea of a framework that leverages multiple data sources to improve the protection capabilities of CIs. Challenges and opportunities are discussed along three main research directions: i) use of distinct and heterogeneous data sources, ii) monitoring with adaptive granularity, and iii) attack modeling and runtime combination of multiple data analysis techniques.
Invariants represent properties of a system that are expected to hold when everything goes well; thus, the violation of an invariant most likely corresponds to the occurrence of an anomaly. In this paper, we discuss the accuracy and the completeness of an anomaly detection system based on invariants. The case study considered is the back-end operation of a SaaS platform. Results show the soundness of the approach and highlight the impact of the invariant mining strategy on the detection capabilities, both in terms of accuracy and of time to reveal violations.
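As a bare-bones illustration of the idea, the sketch below mines a range invariant for a single metric and flags samples that fall outside it; the metric values and margin are hypothetical, and real invariant miners consider far richer templates (e.g., relations among multiple variables).

```python
# Minimal sketch of a range invariant (hypothetical metric and data).

training = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]  # metric under normal load
margin = 0.1                                      # tolerance around the range

# Mining: the invariant is the observed value range, widened by a margin.
lo = min(training) * (1 - margin)
hi = max(training) * (1 + margin)

def violates(sample: float) -> bool:
    # Detection: a sample outside [lo, hi] violates the invariant and is
    # reported as an anomaly.
    return not (lo <= sample <= hi)

for s in (12.2, 19.7, 11.9):
    print(s, "anomaly" if violates(s) else "ok")
```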
Energy efficiency of large processing systems is usually assessed as the relation between a performance metric and a power consumption metric, neglecting malfunctions. Execution failures, however, have a tangible cost in terms of wasted energy, and the fault tolerance mechanisms that manage them in turn consume electricity. We introduce the consumability attribute for batch processing systems, encompassing performance, consumption, and dependability aspects altogether. We propose a metric for its quantification and a methodology for its analysis. Using a real 500-node batch system as a case study, we show that consumability is representative of both efficiency and effectiveness, and we demonstrate the usefulness of the proposed metric and the suitability of the proposed methodology.
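The following sketch conveys the flavor of such a metric, rewarding energy that is turned into successfully completed work; the formula and figures are hypothetical and do not reproduce the metric defined in the paper.

```python
# Illustrative consumability-style indicator (hypothetical formula and data).

jobs_completed = 9_400        # successfully completed jobs in the period
jobs_submitted = 10_000       # submitted jobs (some fail)
energy_kwh = 52_000.0         # total energy drawn in the same period

throughput_per_kwh = jobs_submitted / energy_kwh   # raw efficiency
goodput_ratio = jobs_completed / jobs_submitted    # dependability factor

# Rewards systems that turn energy into *successfully completed* work,
# not merely into attempted work.
consumability = throughput_per_kwh * goodput_ratio
print(f"{consumability:.4f} completed jobs per kWh")
```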