Fog computing is an emerging network paradigm. Owing to its characteristics (e.g., geo-location and constrained resources), fog computing is exposed to a broad range of security threats. An intrusion detection system (IDS) is an essential security technology for countering these threats. In our previous work we introduced a fog computing IDS (FC-IDS) framework. In this paper, we study the optimal intrusion response strategy in fog computing based on that FC-IDS scheme. We model the intrusion process in fog computing mathematically using differential game theory, and from this model we derive the optimal response strategy corresponding to the optimal intrusion strategy. Theoretical analysis and simulation results demonstrate that our security model can effectively stabilize the intrusion frequency of attackers in fog computing.
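The abstract does not spell out the game's equations, so the toy simulation below only illustrates the flavor of a differential game between an attacker and an FC-IDS defender; the state dynamics, feedback strategies, and every parameter value are invented assumptions, not the paper's model.

```python
# Toy attacker/defender differential game: all dynamics, feedback strategies,
# and parameters below are illustrative assumptions, not the paper's FC-IDS model.
a, b = 0.5, 0.8           # assumed attack growth rate and response effectiveness
c_def = 0.6               # assumed quadratic cost of the defender's effort
T, dt = 20.0, 0.01        # horizon and Euler integration step

x = 1.0                   # state: intrusion frequency
for step in range(int(T / dt)):
    u_def = (b / c_def) * x            # defender: respond in proportion to x
    u_atk = max(0.0, a - u_def)        # attacker: back off as responses intensify
    x += dt * (u_atk - b * u_def) * x  # dx/dt = (attack pressure - response) * x

print(f"intrusion frequency stabilizes near {x:.3f}")
```

Under these assumed strategies the intrusion frequency settles at a fixed point (about 0.208 here) rather than growing without bound, which is the qualitative behavior the abstract claims for the real model.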
Low Earth Orbit (LEO) satellite networks can provide multimedia services and play an increasingly important role in the exploitation of space. However, one of the challenges in LEO satellite networks is that services suffer from high symbol error rates, limited storage space, and limited available energy. To analyze service performance in LEO satellite networks, we propose a model based on differential game theory for satisfying the QoS requirements of multimedia applications. The control variable of our model is the transmitting rate, and the objective is to maximize a payoff that depends on the symbol error rate, the available energy, the bandwidth, and the processing ability, so as to guarantee QoS. To solve the model, we apply the Bellman theorem to derive formulas for the trajectory of the optimal transmitting rate. Simulation results verify that the service payoff can be maximized by following the derived transmitting-rate trajectory.
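As a rough illustration of rate control under an energy budget, the sketch below solves a discretized finite-horizon problem by backward induction, the discrete analogue of the Bellman approach the paper uses analytically; the payoff shape, energy-drain model, and all parameters are assumptions, not the paper's formulation.

```python
# Illustrative finite-horizon dynamic program for transmit-rate control.
# Payoff shape, energy dynamics, and parameters are assumed for the sketch.
import numpy as np

T = 30                                   # decision epochs
energies = np.linspace(0.0, 10.0, 101)   # remaining energy, grid step 0.1
rates = np.linspace(0.0, 2.0, 5)         # candidate rates; drain stays on grid
bandwidth, proc = 1.0, 1.0

def payoff(r, e):
    ser = np.exp(-0.3 * e - 1.0)         # assumed symbol-error penalty term
    return r * bandwidth * proc * (1.0 - ser) - 0.1 * r**2

V = np.zeros(len(energies))              # terminal value V_T = 0
policy = np.zeros((T, len(energies)))
for t in reversed(range(T)):
    V_new = np.empty_like(V)
    for i, e in enumerate(energies):
        best, best_r = -np.inf, 0.0
        for r in rates:
            e_next = e - 0.2 * r         # assumed energy drain per unit rate
            if e_next < 0:
                continue                 # infeasible: not enough energy left
            j = int(round(e_next / 0.1)) # index of the next energy level
            val = payoff(r, e) + V[j]    # Bellman backup
            if val > best:
                best, best_r = val, r
        V_new[i], policy[t, i] = best, best_r
    V = V_new

print("optimal initial rate at full energy:", policy[0, -1])
```

Reading `policy` along the induced energy path yields the discrete counterpart of the optimal transmitting-rate trajectory the paper derives in closed form.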
Modern deep-learning-enabled artificial neural networks, such as the Deep Neural Network (DNN) and the Convolutional Neural Network (CNN), have achieved a series of record-breaking results on a broad spectrum of recognition applications. However, the enormous computation and storage requirements associated with such deep and complex neural network models greatly challenge their implementation on resource-limited platforms. The time-based spiking neural network has recently emerged as a promising solution in neuromorphic computing system designs for achieving remarkable computing and power efficiency within a single chip. However, the relevant research activities have been narrowly concentrated on biological plausibility and theoretical learning approaches, resulting in inefficient neural processing and impractical multilayer extension, and thus significant limitations on speed and accuracy when handling realistic cognitive tasks. In this work, a practical multilayer time-based spiking neuromorphic architecture, namely "MT-Spike", is developed to fill this gap. With the proposed practical time-coding scheme, average delay response model, temporal error backpropagation algorithm, and heuristic loss function, "MT-Spike" achieves more efficient neural processing through flexible neural model size reduction while offering very competitive classification accuracy on realistic recognition tasks. Simulation results validate that the algorithmic power of deep multilayer learning can be seamlessly merged with the efficiency of the time-based spiking neuromorphic architecture, demonstrating the great potential of "MT-Spike" on resource- and power-constrained embedded platforms.
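To give a concrete feel for time coding, here is a toy latency-coded layer in which stronger inputs spike earlier and each output neuron fires at a weighted average of delayed input spike times. This is only a loose sketch inspired by the named components; MT-Spike's actual average delay response model and temporal error backpropagation algorithm are more involved and are not reproduced here.

```python
# Toy time-coded spiking layer; all shapes, weights, and delays are invented.
import numpy as np

rng = np.random.default_rng(0)
t_max = 10.0

def encode(x):
    # Time-coding assumption: stronger inputs fire earlier (smaller latency).
    return t_max * (1.0 - np.clip(x, 0.0, 1.0))

x = rng.random(8)                 # one input pattern in [0, 1]
spike_times = encode(x)
W = rng.random((3, 8))            # 3 output neurons, random synaptic weights
delays = rng.random((3, 8))       # assumed per-synapse propagation delays

# Assumed delay-response: an output neuron's firing time is the weighted
# average of its delayed input spike times.
out_times = (W * (spike_times + delays)).sum(axis=1) / W.sum(axis=1)

# Earliest-to-fire output wins; a temporal loss would push the target neuron's
# firing time earlier and the others later via gradient descent on out_times.
print("output firing times:", out_times, "-> class", out_times.argmin())
```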
Whether KAD IDs are uniformly distributed in the KAD network is an important question, because the ID distribution influences the structure of the routing table and, in turn, routing performance. In this paper, two methodologies are used to verify our assumption that KAD IDs are indeed uniformly distributed. The first works in XOR space: we compute the distribution of XOR distances between IDs. The second works in Euclidean space: we compute the distribution of Euclidean distances. The outcome is that the measured distances follow an exponential distribution, which is consistent with uniformly distributed IDs. Finally, we obtain a model that fits the observed data well.
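A quick way to see why uniform IDs lead to exponentially distributed distances is to measure nearest-neighbour XOR distances over synthetic uniform IDs, as in the sketch below; the 32-bit ID width and sample sizes are arbitrary choices for illustration, not a real KAD crawl.

```python
# Under the uniformity hypothesis, the XOR distance from a probe to each ID is
# itself uniform, so the nearest-neighbour distance is approximately exponential
# with mean 2^32/(n+1). IDs here are synthetic 32-bit values, not KAD data.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
ids = rng.integers(0, 2**32, size=n, dtype=np.uint64)
probes = rng.integers(0, 2**32, size=2000, dtype=np.uint64)

# XOR metric: the distance between two IDs is the integer value of their XOR.
nearest = np.array([int((ids ^ p).min()) for p in probes], dtype=float)

print(f"empirical mean NN distance:        {nearest.mean():,.0f}")
print(f"exponential-model mean 2^32/(n+1): {2**32 / (n + 1):,.0f}")
```

If the two printed means agree, the empirical nearest-neighbour distances match the exponential model that uniform IDs predict.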
A novel hybrid human action detection method based on three descriptors is proposed. First, the minimal 3D space region of the human action is detected by combining the frame-difference method with the ViBe algorithm, and the three-dimensional histogram of oriented gradients (HOG3D) is extracted. At the same time, a three-dimensional global descriptor based on frequency-domain filtering (FDF) and local descriptors based on spatial-temporal interest points (STIP) are extracted. Principal component analysis (PCA) is applied to reduce the dimensionality of the gradient histogram and the global descriptor, and a bag-of-words (BoW) model is applied to the STIP-based local descriptors so that the video feature dimensions are consistent. Finally, based on the three kinds of features, a linear support vector machine (SVM) is used to build a decision-level fusion classifier for the effective analysis of multi-class actions. Experimental results show that the proposed feature descriptor has good representation and generalization ability, and the proposed scheme obtains very competitive results on well-known datasets in terms of mean average precision.
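A minimal scikit-learn sketch of this pipeline on synthetic descriptors is shown below. For brevity it concatenates the PCA-reduced global descriptors with the BoW histogram and trains a single linear SVM (feature-level fusion), whereas the paper fuses three classifiers at the decision level; all array sizes and the random features are invented, and real descriptor extraction from video is assumed to have happened upstream.

```python
# PCA on global descriptors + BoW on local descriptors + linear SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
n_videos, n_classes = 120, 4
hog3d = rng.random((n_videos, 300))             # per-video HOG3D descriptor
fdf = rng.random((n_videos, 200))               # per-video FDF global descriptor
stip = [rng.random((rng.integers(20, 60), 64))  # variable-count STIP descriptors
        for _ in range(n_videos)]
y = rng.integers(0, n_classes, n_videos)        # action labels

hog3d_pca = PCA(n_components=50).fit_transform(hog3d)
fdf_pca = PCA(n_components=50).fit_transform(fdf)

# Bag of words: cluster all local descriptors into a codebook, then histogram
# each video's cluster assignments into a fixed-length vector.
codebook = KMeans(n_clusters=32, n_init=4, random_state=0).fit(np.vstack(stip))
bow = np.array([np.bincount(codebook.predict(s), minlength=32) for s in stip])

features = np.hstack([hog3d_pca, fdf_pca, bow])  # feature-level fusion
clf = LinearSVC().fit(features, y)
print("train accuracy:", clf.score(features, y))
```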
Fog Computing is a new platform that can serve mobile devices in a local area. In Fog Computing, resources need to be shared or cached across the widely deployed Fog clusters. In this paper, we propose a Steiner-tree-based caching scheme in which the Fog servers, when caching resources, first produce a Steiner tree that minimizes the total path weight (or cost), so that the cost of caching resources along this tree is minimized. We then give a running example to show how the scheme works and compare the traditional shortest-path scheme with the proposed one. The results show that the Steiner-tree-based scheme works more efficiently.
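The idea can be sketched with networkx's approximate Steiner tree on a toy fog topology; the node names, edge weights, and the set of caching servers below are invented for illustration, and the library routine is a 2-approximation rather than the paper's exact construction.

```python
# Steiner-tree caching vs. a union of shortest paths on an invented topology.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([
    ("cloud", "hub", 6), ("hub", "fog3", 2), ("hub", "fog4", 2),
    ("cloud", "fog3", 7), ("cloud", "fog4", 7),
])

# Fog servers that must hold a copy of the resource (the Steiner terminals).
terminals = ["cloud", "fog3", "fog4"]
tree = steiner_tree(G, terminals, weight="weight")
cost = sum(d["weight"] for _, _, d in tree.edges(data=True))
print("caching tree edges:", list(tree.edges()), "total cost:", cost)

# Baseline: union of shortest paths from the cloud to each caching server.
paths = nx.single_source_dijkstra_path(G, "cloud", weight="weight")
sp_edges = {tuple(sorted(e)) for t in terminals[1:]
            for e in zip(paths[t], paths[t][1:])}
sp_cost = sum(G.edges[e]["weight"] for e in sp_edges)
print("shortest-path union cost:", sp_cost)
```

On this topology the shortest-path union costs 14 because each path takes its own direct edge, while the Steiner tree shares the cloud-hub link and comes out cheaper, which mirrors the comparison made in the paper's running example.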