Power Challenges Caused by IoT Edge Nodes: Securing and Sensing Our World
    Abstract:
This paper discusses power challenges caused by an explosion of data-driven applications at the edge of the IoT, and how technology, power-management techniques, machine learning (AI), and energy recycling are evolving to do more computing at the edge of the IoT in order to solve power, data, communication, and latency issues. The paper shows how innovation is needed to bring different topics together to provide an optimal solution for the power challenges ahead. A use case with biometric security at the edge is discussed.
    Keywords:
    Edge device
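
To make the abstract's point about power-management techniques concrete, here is a minimal Python sketch of duty-cycled sensing with local processing, one common way edge nodes reduce power by transmitting only events of interest. All names, periods, and thresholds are illustrative assumptions, not taken from the paper.

# Sketch of duty-cycled sensing with local processing on an edge node.
import time

SAMPLE_PERIOD_S = 1.0   # hypothetical wake-up period; a real node might sleep for minutes

def read_sensor():
    # Placeholder for a real ADC or sensor read on the node.
    return 0.0

def process_locally(sample):
    # Placeholder for on-node filtering or inference; computing at the edge
    # avoids radio transmissions, which often dominate the power budget.
    return abs(sample) > 1.0

for _ in range(3):                      # bounded loop so the sketch terminates
    start = time.monotonic()
    sample = read_sensor()
    if process_locally(sample):
        pass                            # radio transmission would go here
    busy = time.monotonic() - start
    time.sleep(max(0.0, SAMPLE_PERIOD_S - busy))   # deep sleep between samples
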
Related Papers:
This paper aims to discuss the use of edge computing in medicine, with a focus on the analysis of ECG biosignals. Edge computing is a novel paradigm which aims to perform computations (or at least most of them) near the data source, achieving some advantages over classical centralized or distributed approaches (e.g. on the cloud). After introducing edge computing and the novel NVIDIA Jetson device, the paper presents a use case regarding the classification of ECG biosignals on such a device. Several experiments were conducted to show the main differences between traditional and edge-based data analysis approaches. Performance evaluation showed little difference between the edge-based and a traditional approach, which is notable given the power-constrained scenario of the edge device. The main results of the paper include an overview of the edge computing paradigm and a first performance evaluation of deep learning applications on an NVIDIA Jetson device.
    Edge device
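
A minimal sketch of what edge-side ECG inference like the study above might look like in PyTorch. The network architecture, 360-sample input window, and five output classes are illustrative assumptions, not the paper's actual setup.

# Sketch of on-device ECG beat classification with a small 1-D CNN.
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

device = "cuda" if torch.cuda.is_available() else "cpu"  # GPU on a Jetson
model = ECGNet().to(device).eval()
beat = torch.randn(1, 1, 360, device=device)  # one ECG beat window (dummy data)
with torch.no_grad():
    pred = model(beat).argmax(dim=1)          # predicted class index
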
Many real-world applications are widely adopting the edge computing paradigm due to its low latency and better privacy protection. With notable success in AI and deep learning (DL), edge devices and AI accelerators play a crucial role in deploying DL inference services at the edge of the Internet. While prior works have quantified various edge devices' efficiency, most studies focused on the performance of edge devices with single DL tasks. Therefore, there is an urgent need to investigate AI multi-tenancy on edge devices, required by many advanced DL applications for edge computing. This work investigates two techniques – concurrent model executions and dynamic model placements – for AI multi-tenancy on edge devices. With image classification as an example scenario, we empirically evaluate AI multi-tenancy on various edge devices, AI accelerators, and DL frameworks to identify its benefits and limitations. Our results show that multi-tenancy significantly improves DL inference throughput, by up to 3.3×–3.8× on the Jetson TX2. These AI multi-tenancy techniques also open up new opportunities for flexible deployment of multiple DL services on edge devices and AI accelerators.
    Edge device
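
A minimal sketch of the concurrent-model-execution idea described in the abstract above: two classifiers serve requests from separate threads on one device, so the accelerator is shared between tenants. The models, input shapes, and request counts are illustrative placeholders (requires torchvision >= 0.13 for the weights argument).

# Sketch of AI multi-tenancy via concurrent model executions.
import threading
import torch
import torchvision.models as models

device = "cuda" if torch.cuda.is_available() else "cpu"
tenants = [models.mobilenet_v2(weights=None).to(device).eval(),
           models.resnet18(weights=None).to(device).eval()]

def serve(net, n_requests=100):
    x = torch.randn(1, 3, 224, 224, device=device)  # dummy image batch
    with torch.no_grad():
        for _ in range(n_requests):
            net(x)  # each tenant runs its own inference stream

threads = [threading.Thread(target=serve, args=(net,)) for net in tenants]
for t in threads:
    t.start()
for t in threads:
    t.join()
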
Edge computing is a new paradigm enabling intelligent applications for the Internet of Things (IoT) using mobile, low-cost IoT devices embedded with data analytics. Due to the resource limitations of Internet of Things devices, it is essential to use these resources optimally. Therefore, intelligence needs to be applied through an efficient deep learning model to optimize resources like memory, power, and computational ability. In addition, intelligent edge computing is essential for real-time applications requiring an end-to-end delay or response time within a few seconds. We propose decentralized heterogeneous edge clusters deployed with an optimized pre-trained YOLOv2 model. In our model, the weights have been pruned and then split into fused layers and distributed to edge devices for processing. Later, the gateway device merges the partial results from each edge device to obtain the processed output. We deploy a convolutional neural network (CNN) on resource-constrained IoT devices to make them intelligent and realistic. Evaluation was done by deploying the proposed model on five IoT edge devices and a gateway device enabled with a hardware accelerator. The evaluation of our proposed model shows significant improvement in terms of communication size and inference latency. Compared to DeepThings with 5×5 fused-layer partitioning for five devices, our proposed model reduces communication size by ~14.4% and inference latency by ~16%.
    Edge device
    Citations (30)
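
The following Python sketch illustrates the general partition-and-merge idea behind fused-layer schemes such as DeepThings, which the abstract above builds on: the input is split into a grid of overlapping tiles, each tile is processed by one edge device, and the gateway merges the partial outputs. The grid size, overlap, and per-device computation are illustrative assumptions, not the paper's exact algorithm.

# Sketch of grid-tile partitioning with overlap and gateway-side merging.
import numpy as np

def split_into_tiles(image, grid=5, overlap=2):
    h, w = image.shape
    th, tw = h // grid, w // grid
    tiles = []
    for i in range(grid):
        for j in range(grid):
            r0, r1 = max(0, i * th - overlap), min(h, (i + 1) * th + overlap)
            c0, c1 = max(0, j * tw - overlap), min(w, (j + 1) * tw + overlap)
            tiles.append(((i, j), image[r0:r1, c0:c1]))  # overlapping tile
    return tiles

def edge_device(tile):
    # Stand-in for the per-device CNN-layer computation on one tile.
    return tile.mean()

def gateway_merge(results, grid=5):
    # Gateway reassembles the partial results into the output grid.
    out = np.zeros((grid, grid))
    for (i, j), val in results:
        out[i, j] = val
    return out

image = np.random.rand(100, 100)
partials = [(ij, edge_device(t)) for ij, t in split_into_tiles(image)]
print(gateway_merge(partials))
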
Motivated by the prospects of 5G communications and the industrial Internet of Things (IoT), recent years have seen the rise of a new computing paradigm, edge computing, which shifts data analytics to network edges that are in the proximity of big data sources. Although deep neural networks (DNNs) have been extensively used on many platforms and in many scenarios, they are usually both compute and memory intensive and thus difficult to deploy on resource-limited edge devices and in performance-demanding edge applications. Hence, there is an urgent need for techniques that enable DNN models to fit into edge devices while ensuring acceptable execution costs and inference accuracy. This article proposes an on-demand DNN model inference system for industrial edge devices, called knowledge distillation and early exit on edge (EdgeKE). It focuses on two design knobs: first, DNN compression based on knowledge distillation, which trains compact edge models under the supervision of large complex models to improve accuracy and speed; second, DNN acceleration based on early exit, which provides flexible choices for satisfying distinct latency or accuracy requirements from edge applications. Through extensive evaluations on the CIFAR100 dataset and across three state-of-the-art edge devices, experimental results demonstrate that EdgeKE significantly outperforms the baseline models in terms of inference latency and memory footprint, while maintaining competitive classification accuracy. Furthermore, EdgeKE is verified to be efficiently adaptive to application requirements on inference performance. The accuracy loss is within 4.84% under various latency constraints, and the speedup ratio is up to 3.30× under various accuracy requirements.
    Edge device
    Memory footprint
    Speedup
    Citations (34)
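
A minimal PyTorch sketch of the knowledge-distillation objective used by compression approaches like the one above: a compact student is trained to match a large teacher's softened outputs as well as the true labels. The temperature, loss weighting, and logit shapes are illustrative assumptions, not EdgeKE's published configuration.

# Sketch of a standard (Hinton-style) knowledge-distillation loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    # Hard targets: ordinary cross-entropy against the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 100, requires_grad=True)  # student logits (CIFAR100-sized)
t = torch.randn(8, 100)                      # teacher logits (would be precomputed)
y = torch.randint(0, 100, (8,))              # ground-truth labels
loss = distillation_loss(s, t, y)
loss.backward()                              # gradients flow to the student only
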
Edge computing is a promising paradigm in which resource processing happens close to the edge of the Internet. An increasing number of devices are forming an interconnected network of devices in the Internet of Things (IoT), so huge amounts of data are produced and Internet traffic has increased significantly over the years. In this paper, we examine current edge computing architectures and their associated challenges, along with the respective current state-of-the-art solutions. We then state the enabling features of edge computing in IoT development and some edge computing challenges with respect to their IoT applications. We conclude the paper by summarizing, in tables, solutions in both IoT and edge computing for the identified challenges.
    Edge device
    Citations (9)
Artificial Intelligence (AI) at the edge is the utilization of AI in real-world devices. Edge AI refers to the practice of doing AI computations near the users, at the network's edge, instead of at a centralised location like a cloud service provider's data centre. With the latest innovations in AI efficiency, the proliferation of Internet of Things (IoT) devices, and the rise of edge computing, the potential of Edge AI has now been unlocked. This study provides a thorough analysis of AI approaches and capabilities as they pertain to edge computing, or Edge AI. Further, a detailed survey of edge computing and its paradigms, including the transition to Edge AI, is presented to explore the background of each variant proposed for implementing edge computing. Furthermore, we discuss the Edge AI approach to deploying AI algorithms and models on edge devices, which are typically resource-constrained devices located at the edge of the network. We also present the technology used in various modern IoT applications, including autonomous vehicles, smart homes, industrial automation, healthcare, and surveillance, and discuss machine learning algorithms optimized for resource-constrained environments. Finally, important open challenges and potential research directions in the field of edge computing and Edge AI are identified and investigated. We hope that this article will serve as a common blueprint that unites important stakeholders and helps accelerate development in the field of Edge AI.
    Edge device
    Blueprint
    Citations (140)
We examine data-intensive real-time applications, such as forest fire detection, medical emergency services, and oil pipeline monitoring, that require relatively low response times when processing data from Internet of Things (IoT) devices. Typically, in such circumstances, the edge computing paradigm is utilised to drastically reduce the processing delay of such applications. However, with the growing number of IoT devices, the edge device cluster needs to be configured properly such that the real-time requirements are met. Therefore, the cluster configuration must be dynamically adapted to the changing network topology of the edge cluster in order to minimise the overall communication delay incurred by edge devices when processing data from IoT devices. To this end, we propose an intelligent assignment of IoT devices to edge devices based on Reinforcement Learning, such that communication delay is minimised and none of the edge devices is overloaded. We demonstrate, with some preliminary results, that our algorithm outperforms the state-of-the-art.
    Edge device
    Response time
    Citations (0)
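
A simplified Python sketch of the learning-based assignment idea in the abstract above: a tabular, bandit-style Q-learner picks an edge device for each IoT device, rewarded for low communication delay and penalised for overloading a device. The delays, capacities, penalty, and hyperparameters are illustrative assumptions, not the paper's algorithm.

# Sketch of learned IoT-to-edge assignment with an overload penalty.
import random

N_IOT, N_EDGE, CAPACITY = 6, 3, 3
delay = [[random.uniform(1, 10) for _ in range(N_EDGE)] for _ in range(N_IOT)]
Q = [[0.0] * N_EDGE for _ in range(N_IOT)]
alpha, eps = 0.1, 0.2

for _ in range(2000):
    load = [0] * N_EDGE
    for iot in range(N_IOT):
        if random.random() < eps:                          # explore
            a = random.randrange(N_EDGE)
        else:                                              # exploit best estimate
            a = max(range(N_EDGE), key=lambda e: Q[iot][e])
        load[a] += 1
        # Reward: negative delay, with a large penalty for overload.
        reward = -delay[iot][a] - (100 if load[a] > CAPACITY else 0)
        Q[iot][a] += alpha * (reward - Q[iot][a])          # incremental update

assignment = [max(range(N_EDGE), key=lambda e: Q[i][e]) for i in range(N_IOT)]
print(assignment)   # learned edge device for each IoT device
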
Pervasive intelligence promises to revolutionize society, from the Industrial Internet of Things (IIoT), to smart infrastructure and homes, to personal health monitoring. Unfortunately, many edge devices that are pervasively embedded into infrastructure or implanted into humans are severely resource-constrained. As performing computations at the edge becomes increasingly important to meet latency deadlines and retain sensitive data locally, severe resource constraints present a challenge because many algorithms are too large to fit on a single edge device. In this paper, we focus on distributing inference for neural networks (NNs) with convolutional and fully connected layers across multiple edge nodes. In order to improve efficiency on severely resource-constrained edge nodes for diverse NN architectures, we present an end-to-end, automated approach, DENNI, that optimizes NN distribution with minimal nodes while meeting memory constraints. When targeting a network of edge nodes with 256KB of non-volatile memory connected via Bluetooth Low Energy, DENNI successfully distributes NN inference for a variety of machine learning algorithms across multiple edge nodes where other, static approaches cannot.
    Edge device
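
The following Python sketch illustrates the core constraint in a DENNI-style distribution: NN layers are packed, in execution order, onto edge nodes so that no node exceeds its non-volatile memory (256KB in the abstract above). The per-layer footprints and the greedy packing strategy are illustrative assumptions, not the paper's optimiser.

# Sketch of greedy layer-to-node packing under a per-node memory limit.
NODE_MEMORY = 256 * 1024          # bytes of non-volatile memory per edge node

# Hypothetical per-layer parameter footprints, in bytes.
layers = [("conv1", 90_000), ("conv2", 180_000),
          ("fc1", 200_000), ("fc2", 40_000)]

def greedy_partition(layers, capacity):
    nodes, current, used = [], [], 0
    for name, size in layers:          # layers must stay in execution order
        if size > capacity:
            raise ValueError(f"{name} alone exceeds one node's memory")
        if used + size > capacity:     # start a new node when this one is full
            nodes.append(current)
            current, used = [], 0
        current.append(name)
        used += size
    nodes.append(current)
    return nodes

print(greedy_partition(layers, NODE_MEMORY))
# -> [['conv1'], ['conv2'], ['fc1', 'fc2']]
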