This paper presents a particle filtering algorithm for multiple object tracking. The proposed particle filter (PF) embeds a data association technique based on joint probabilistic data association (JPDA), which handles the uncertainty of the measurement origin.
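The abstract does not give the exact update equations; as a rough illustration of the idea, the following is a minimal single-target sketch of a JPDA-style particle weight update, in which each particle's likelihood is a mixture over all candidate measurements plus a missed-detection term. All parameter values (detection probability, clutter density, noise level) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def jpda_weight_update(particles, measurements, p_detect=0.9,
                       clutter_density=1e-3, noise_std=1.0):
    """Update particle weights under measurement-origin uncertainty.

    Each particle's likelihood is a mixture over all measurements
    (any one could have originated from the target) plus a
    missed-detection term -- a simplified, single-target
    JPDA-style update.
    """
    # Likelihood of each measurement given each particle (1-D example).
    diffs = measurements[None, :] - particles[:, None]        # shape (N, M)
    lik = np.exp(-0.5 * (diffs / noise_std) ** 2) / (np.sqrt(2 * np.pi) * noise_std)
    # Mixture: missed detection (clutter only) + detection via any measurement.
    weights = (1 - p_detect) * clutter_density + p_detect * lik.mean(axis=1)
    return weights / weights.sum()

# Toy run: particles spread around the true state 0.0; the second
# measurement is clutter far from the target.
particles = rng.normal(0.0, 2.0, size=500)
measurements = np.array([0.1, 6.0])
w = jpda_weight_update(particles, measurements)
estimate = np.sum(w * particles)
```

Because the clutter measurement at 6.0 lies far from the particle cloud, the weighted estimate stays close to the true state near zero.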
The Internet of Things (IoT) facilitates the integration of objects and different sensors, providing communication among them without human intervention. However, the demand for IoT and its various applications continues to grow, coupled with the need to achieve foolproof security requirements. IoT produces a vast amount of data under several constraints such as low processing power, energy, and memory. These constraints, along with the invaluable data produced by IoT devices, make IoT vulnerable to various security attacks. This paper presents an overview of IoT, its well-known system architecture, and enabling technologies, and discusses security challenges and goals. Furthermore, we analyze security vulnerabilities and provide a state-of-the-art security taxonomy. A taxonomy of the most relevant and current IoT security attacks is presented for the application, network, and physical layers. While most other surveys studied one of the areas of security measures, this study considers and reports on the most advanced security countermeasures within the areas of autonomic, encryption-based, and learning-based approaches. Additionally, we uncover security challenges that the research community may face regarding security implementation in heterogeneous IoT environments. Finally, we provide different visions of possible security solutions and future research directions.
As the range of security attacks increases across diverse network applications, intrusion detection systems are of central interest. Such detection systems are even more crucial for the Internet of Things (IoT) due to the voluminous and sensitive data it produces. However, real-world networks produce imbalanced traffic that includes different and unknown attack types. Due to this imbalanced nature of network traffic, traditional learning-based detection techniques suffer from lower overall detection performance, a higher false-positive rate, and lower minority-class attack detection rates. To address this issue, we propose a novel deep generative model called the Class-wise Focal Loss Variational AutoEncoder (CFLVAE), which overcomes the data imbalance problem by generating new samples for minority attack classes. Furthermore, we design an effective and cost-sensitive objective function called Class-wise Focal Loss (CFL) to train the traditional Variational AutoEncoder (VAE). The CFL objective function focuses on different minority-class samples and scrutinizes the high-level feature representation of the observed data. This leads the VAE to generate more realistic, diverse, and high-quality intrusion data, yielding a well-balanced intrusion dataset. The balanced dataset improves the intrusion detection accuracy of learning-based classifiers. A Deep Neural Network (DNN) classifier with a unique architecture is then trained on the balanced intrusion dataset to enhance detection performance. Moreover, we utilize a challenging and highly imbalanced intrusion dataset, NSL-KDD, to conduct an extensive experiment with the proposed model. The results demonstrate that the proposed CFLVAE with DNN (CFLVAE-DNN) model obtains promising performance in generating realistic new intrusion data samples and achieves superior intrusion detection performance.
Additionally, the proposed CFLVAE-DNN model outperforms several state-of-the-art data generation and traditional intrusion detection methods. Specifically, the CFLVAE-DNN achieves 88.08% overall intrusion detection accuracy and a 3.77% false-positive rate. More significantly, it obtains the highest detection rates for the low-frequency attacks U2R (79.25%) and R2L (67.5%) among all the state-of-the-art algorithms compared.
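The abstract does not give the exact form of the CFL objective; assuming it follows the standard focal-loss form augmented with per-class weights (a reasonable reading, not the paper's verified formula), a minimal sketch might look like:

```python
import numpy as np

def class_wise_focal_loss(probs, labels, class_weights, gamma=2.0, eps=1e-12):
    """Class-wise focal loss: cross-entropy down-weighted for easy,
    well-classified examples and re-weighted per class so that rare
    (minority attack) classes contribute more to the gradient.

    probs: (N, C) predicted class probabilities
    labels: (N,) integer class labels
    class_weights: (C,) per-class weights, e.g. inverse class frequencies
    """
    p_t = probs[np.arange(len(labels)), labels]   # probability of true class
    alpha = class_weights[labels]                 # per-class weight alpha_c
    loss = -alpha * (1.0 - p_t) ** gamma * np.log(p_t + eps)
    return loss.mean()

# Toy batch: one easy example (p_t = 0.9) and one hard example (p_t = 0.5).
probs = np.array([[0.9, 0.05, 0.05],
                  [0.5, 0.30, 0.20]])
labels = np.array([0, 0])
weights = np.array([1.0, 5.0, 5.0])   # hypothetical: minority classes up-weighted
loss = class_wise_focal_loss(probs, labels, weights)
```

The `(1 - p_t) ** gamma` factor suppresses the contribution of confidently correct examples, so training concentrates on the hard, typically minority-class samples.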
This paper presents an adaptive code-aided technique for the suppression of narrowband interference (NBI) in direct-sequence code-division multiple access (DS-CDMA) systems. This technique uses a multiuser detector based on the interacting multiple model (IMM) algorithm. The detector is based on the concept that the effective model of the received signal at a given time instant can be approximated by one of the models in the IMM algorithm. Simulations are used to compare the performance of the proposed technique with that of the recursive least squares (RLS) version of the minimum mean-square error (MMSE) multiuser detector.
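The abstract only names the IMM algorithm; as background, the core of one IMM cycle is a Bayesian update of the model probabilities using each model's measurement likelihood. A minimal sketch of that step, with hypothetical numbers, is:

```python
import numpy as np

def imm_model_probs(mu, trans, likelihoods):
    """One IMM cycle for the model probabilities.

    mu: (M,) prior model probabilities
    trans: (M, M) model transition matrix, trans[i, j] = P(model j | model i)
    likelihoods: (M,) measurement likelihood under each model's filter
    """
    c = trans.T @ mu                  # predicted (mixed) model probabilities
    mu_post = c * likelihoods         # Bayes update with each model's likelihood
    return mu_post / mu_post.sum()

mu = np.array([0.5, 0.5])             # two candidate signal models
trans = np.array([[0.95, 0.05],
                  [0.05, 0.95]])      # models tend to persist
lik = np.array([0.2, 0.8])            # model 2 explains the measurement better
mu_new = imm_model_probs(mu, trans, lik)   # -> [0.2, 0.8]
```

The probabilities shift toward the model that best explains the received signal, which is the mechanism the detector exploits to track the effective signal model over time.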
A key issue in feature extraction is the capability of a technique to extract distinctive features to represent facial expressions while requiring low computational complexity. In this study, the authors propose a novel approach for appearance-based facial feature extraction to perform facial expression recognition on video sequences. The proposed spatiotemporal texture map (STTM) is capable of capturing subtle spatial and temporal variations of facial expressions with low computational complexity. First, the face is detected using the Viola–Jones face detector and frames are cropped to remove unnecessary background. Facial features are then modelled with the proposed STTM, which uses the spatiotemporal information extracted from the three-dimensional Harris corner function. A block-based method is adopted to extract the dynamic features and represent them in the form of histograms. The features are then classified into emotion classes by a support vector machine classifier. The experimental results demonstrate that the proposed approach outperforms state-of-the-art approaches, with average recognition rates of 95.37, 98.56, and 84.52% on datasets containing posed expressions, spontaneous micro-expressions, and close-to-real-world expressions, respectively. They also show that the proposed algorithm requires low computational cost.
Nonlinear distributed tracking of a single target is addressed in this paper. The problem consists of tracking a target of interest while moving the sensors to 'best' positions according to a criterion appropriate for the problem. Both target tracking and manoeuvring of the sensors are carried out jointly using a novel Sequential Monte Carlo technique. The proposed technique is illustrated on a bearings-only problem, and simulations are used to compare its performance with that of distributed tracking using fixed sensors.
Sign language recognition using computer vision techniques enables machines to function as interpreters of sign language while eliminating the need for cumbersome data gloves. In this paper, a robust approach for recognition of bare-handed static sign language is presented, using a novel combination of features. These include Local Binary Patterns (LBP) histogram features based on color and depth information, and also geometric features of the hand. Linear binary Support Vector Machine (SVM) classifiers are used for recognition, coupled with template matching in the case of multiple matches. An accurate hand segmentation scheme using the Kinect depth sensor is also presented. The resulting sign language recognition system could be employed in many practical scenarios and works in complex environments in real-time. It is also shown to be robust to changes in distance between the user and camera and can handle possible variations in fingerspelling among different users. The algorithm is tested on two ASL fingerspelling datasets where overall classification rates over 90% are observed.
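The abstract names LBP histograms as one of the feature types; as a rough illustration (not the paper's exact variant, which also uses depth and geometric cues), a basic 8-neighbour LBP histogram can be computed as follows:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram.

    Each interior pixel is encoded as an 8-bit code: one bit per
    neighbour, set when the neighbour is >= the centre pixel. The
    normalised 256-bin histogram of codes serves as a texture
    descriptor.
    """
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                      # interior (centre) pixels
    # Neighbour offsets in clockwise order starting at top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

On a perfectly uniform image every neighbour equals its centre, so every code is 255 and the histogram concentrates all mass in that single bin.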
Land Mobile Radios (LMRs) are two-way radio communication systems widely used for public-safety operations. An unintentional strong far-out interfering signal causes the LMR receiver to be overloaded and reduces the gain of the weak desired signal. Conventional non-learning-based methods to mitigate the effects of interference require prior knowledge of the interferer or additional filtering components at the RF front-end of the receiver. In this paper, we propose a novel data-driven unsupervised Deep Learning-based approach for joint interference detection, interference cancellation, and signal detection of narrowband LMR signals, which we refer to as DeepLMR. DeepLMR uses a Variational Autoencoder (VAE)-based framework known as Recovery VAE (Re-VAE), with a Gumbel-Softmax distribution that encodes the input to a lower-dimensional latent space representation. The latent space representations are sampled from a categorical distribution and classified to the corresponding symbols of the transmitted signal. Experimental results with real-world signals distorted by a strong far-out interfering signal show that the proposed DeepLMR architecture improves bit error rate (BER) performance compared with a conventional frequency discriminator and other state-of-the-art Deep Learning-based architectures.
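The abstract's mention of a Gumbel-Softmax distribution refers to the standard reparameterisation trick for sampling from a categorical latent space; the sketch below illustrates that trick in isolation (the logits and temperature are hypothetical, and Re-VAE's actual encoder/decoder are not shown):

```python
import numpy as np

rng = np.random.default_rng(42)

def gumbel_softmax(logits, tau=0.5):
    """Draw a differentiable, approximately one-hot sample from a
    categorical distribution (the Gumbel-Softmax / Concrete trick).

    Adding i.i.d. Gumbel noise to the logits and taking a softmax
    with temperature tau yields a relaxed one-hot vector; as
    tau -> 0 the sample approaches a hard categorical draw.
    """
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()                  # subtract max for numerical stability
    expy = np.exp(y)
    return expy / expy.sum()

# Hypothetical logits over three candidate symbols.
sample = gumbel_softmax(np.array([2.0, 0.5, 0.1]), tau=0.5)
```

Because the softmax is differentiable, gradients can flow through the sampling step during training, which is what makes a categorical latent space usable inside a VAE.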
Contour tracking for a single-source emission is addressed in this paper. The problem is solved by estimating the contour boundary positions using a set of particle filters. The use of Sequential Monte Carlo techniques enables the tracking to be performed when the measurements are noisy, and the tracking results also include the estimation uncertainty. The proposed technique is illustrated on a SCIPUFF-generated single-emission scenario, and simulation experiments demonstrate successful tracking throughout the tracking period.