    Prediction of Short-Term Photovoltaic Power Via Self-Attention-Based Deep Learning Approach
    Abstract:
Photovoltaic (PV) power is characterized by randomness and intermittency. As PV becomes increasingly popular, PV power prediction grows increasingly significant for the efficiency and stability of the power grid. At present, deep-learning-based PV power prediction models show superior performance, but they ignore the interdependence between prediction error and the input characteristics of the neural network. This paper proposes a self-attention mechanism (SAM)-based hybrid method combining a one-dimensional convolutional neural network (1DCNN) and long short-term memory (LSTM), named 1DCNN-LSTM-SAM. In the proposed model, SAM redistributes the neural weights in the 1DCNN-LSTM, and the 1DCNN-LSTM then further extracts the spatiotemporal information of the PV power. Polysilicon PV array data from Australia are employed to test and verify the proposed model against five competing models. The results show that applying SAM to the 1DCNN-LSTM improves its ability to capture the global dependence between inputs and outputs during learning, as well as the long-distance dependencies in the sequence. In addition, the mean absolute percentage error of the 1DCNN-LSTM-SAM under sunny, partially cloudy, and cloudy weather types improves by 24.2%, 14.4%, and 18.3%, respectively, compared with the best of the five competing models. Furthermore, the weight distribution mechanism of self-attention at the back end of the LSTM is analyzed quantitatively, verifying the superiority of SAM.
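The weight-redistribution role of the SAM can be sketched with scaled dot-product self-attention applied over a sequence of LSTM outputs. This is an illustrative NumPy sketch, not the paper's implementation; the projection matrices and dimensions are invented for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(h, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence of hidden states.

    h          : (T, d) sequence of LSTM outputs, one row per time step
    wq, wk, wv : (d, d) learned projection matrices
    Returns the reweighted sequence (T, d) and the (T, T) attention map.
    """
    q, k, v = h @ wq, h @ wk, h @ wv
    scores = q @ k.T / np.sqrt(h.shape[1])   # pairwise time-step similarity
    a = softmax(scores, axis=-1)             # each row sums to 1
    return a @ v, a

rng = np.random.default_rng(0)
T, d = 6, 4                                  # 6 time steps, 4 hidden units
h = rng.normal(size=(T, d))
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(h, wq, wk, wv)
```

Each row of the attention map weights every time step against every other, which is how such a layer captures long-distance dependencies regardless of sequence position.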
Deep learning has been very successful in dealing with big data from various fields of science and engineering. It has brought breakthroughs using various deep neural network architectures and structures tailored to different learning tasks. An important family of deep neural networks is the deep convolutional neural networks. We give a survey of deep convolutional neural networks induced by 1-D or 2-D convolutions. We demonstrate how these networks are derived from convolutional structures, and how they can be used to approximate functions efficiently. In particular, we illustrate with explicit rates of approximation that, in general, deep convolutional neural networks perform at least as well as fully connected shallow networks, and that they can outperform fully connected shallow networks in approximating radial functions when the dimension of the data is large.
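The 1-D convolution these networks are built from can be shown in a few lines of NumPy. This is a minimal sketch; the edge-detecting kernel and signal are invented for illustration:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation, as used in deep learning)."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

signal = np.array([0., 0., 1., 1., 1., 0., 0.])
edge_kernel = np.array([1., -1.])   # finite-difference kernel: responds at edges
edges = conv1d(signal, edge_kernel)
```

Stacking many such learned kernels, with nonlinearities in between, is what gives deep convolutional networks their approximation power.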
In this work, we propose a novel recurrent neural network (RNN) architecture. The proposed RNN, gated-feedback RNN (GF-RNN), extends the existing approach of stacking multiple recurrent layers by allowing and controlling signals flowing from upper recurrent layers to lower layers using a global gating unit for each pair of layers. The recurrent signals exchanged between layers are gated adaptively based on the previous hidden states and the current input. We evaluated the proposed GF-RNN with different types of recurrent units, such as tanh, long short-term memory, and gated recurrent units, on the tasks of character-level language modeling and Python program evaluation. Our empirical evaluation of different RNN units revealed that in both tasks the GF-RNN outperforms the conventional approaches to building deep stacked RNNs. We suggest that the improvement arises because the GF-RNN can adaptively assign different layers to different timescales and layer-to-layer interactions (including the top-down ones which are not usually present in a stacked RNN) by learning to gate these interactions.
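A minimal sketch of the gating idea, assuming a two-layer tanh RNN with a single scalar gate on the top-down (layer 2 to layer 1) connection. The real GF-RNN gates every pair of layers; the parameter shapes here are invented for the example:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gf_rnn_step(x, h1_prev, h2_prev, p):
    """One step of a 2-layer gated-feedback tanh RNN.

    A scalar global gate g, computed from the current input and the
    previous hidden states of both layers, controls how much of layer 2's
    previous state feeds back into layer 1.
    """
    g = sigmoid(p["wg"] @ x + p["ug"] @ np.concatenate([h1_prev, h2_prev]))
    h1 = np.tanh(p["W1"] @ x + p["U1"] @ h1_prev + g * (p["U21"] @ h2_prev))
    h2 = np.tanh(p["W2"] @ h1 + p["U2"] @ h2_prev)
    return h1, h2, g

rng = np.random.default_rng(1)
din, d = 3, 4
p = {"wg": rng.normal(size=din), "ug": rng.normal(size=2 * d),
     "W1": rng.normal(size=(d, din)), "U1": rng.normal(size=(d, d)),
     "U21": rng.normal(size=(d, d)),
     "W2": rng.normal(size=(d, d)), "U2": rng.normal(size=(d, d))}
h1 = h2 = np.zeros(d)
for x in rng.normal(size=(5, din)):        # run 5 time steps
    h1, h2, g = gf_rnn_step(x, h1, h2, p)
```

When g is near zero the model degenerates to an ordinary stacked RNN; letting the network learn g per step is what allows it to assign layers to different timescales.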
Fire detection using computer vision techniques and image processing has been a topic of interest among researchers; indeed, accurate computer vision techniques can outperform traditional models of fire detection. With the current advancement of technology, however, such computer vision techniques are being replaced by deep learning models such as convolutional neural networks (CNNs). Much of the existing research has been assessed only on balanced datasets, which can lead to unsatisfactory results and misleading estimates of real-world performance, since fire is a rare and abnormal real-life event; the performance of a traditional CNN is very low when evaluated on imbalanced datasets. This paper therefore proposes a transfer learning approach based on deep CNNs to detect fire, using the pre-trained architectures VGG and MobileNet to develop a fire detection system. These deep CNN models are tested on imbalanced datasets to imitate real-world scenarios. The results show that these models increase accuracy significantly and completely outperform the traditional convolutional neural network model. The accuracy of MobileNet is roughly the same as VGGNet's, but MobileNet is smaller and faster than VGG.
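The recipe described above, freezing a pre-trained feature extractor and training only a classifier head on imbalanced data, can be sketched with a toy NumPy stand-in. Here the "backbone" is a frozen random projection rather than VGG or MobileNet, and the imbalanced data are synthetic; everything below is an illustrative assumption, not the paper's pipeline:

```python
import numpy as np

def frozen_features(x, W):
    """Stand-in for a pre-trained backbone: a fixed, non-trainable feature
    extractor (frozen random projection with ReLU). In practice this would
    be the convolutional base of a pre-trained network."""
    return np.maximum(x @ W, 0.0)

def train_head(feats, y, epochs=200, lr=0.1):
    """Train only the classification head (logistic regression) on top of
    the frozen features, upweighting the rare positive class to mimic
    handling an imbalanced fire/no-fire dataset."""
    w = np.zeros(feats.shape[1])
    pos_weight = (y == 0).sum() / max((y == 1).sum(), 1)
    sample_w = np.where(y == 1, pos_weight, 1.0)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))
        grad = feats.T @ (sample_w * (p - y)) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 8))
y = (x[:, 0] + 0.1 * rng.normal(size=200) > 1.2).astype(float)  # rare class
W = rng.normal(size=(8, 16))                  # "pre-trained", kept frozen
feats = frozen_features(x, W)
w = train_head(feats, y)
acc = ((feats @ w > 0) == (y == 1)).mean()
```

Only the head's weights are updated; the backbone stays fixed, which is what makes transfer learning cheap on small or skewed datasets.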
Purpose: The paper addresses tracking algorithms based on deep learning; four deep learning tracking models are developed and compared with each other to prevent collision and to achieve target tracking in autonomous aircraft. Design/methodology/approach: First, detection methods were used to follow the visual target, and then the tracking methods were examined. Four models were developed: deep convolutional neural networks (DCNN), deep convolutional neural networks with fine-tuning (DCNNFN), transfer learning with deep convolutional neural network (TLDCNN), and fine-tuning deep convolutional neural network with transfer learning (FNDCNNTL). Findings: Training DCNN took 9 min 33 s, with an accuracy of 84%. DCNNFN trained in 4 min 26 s with an accuracy of 91%. Training TLDCNN took 34 min 49 s with an accuracy of 95%. With FNDCNNTL, training took 34 min 33 s and the accuracy was nearly 100%. Originality/value: Compared with results in the literature ranging from 89.4% to 95.6%, FNDCNNTL achieved better results in this paper.
Considering deep sequence learning for practical applications, two representative RNNs, LSTM and GRU, may come to mind first. Nevertheless, is there no chance for other RNNs? Will there be a better RNN in the future? In this work, we propose a novel, succinct, and promising RNN: the Fusion Recurrent Neural Network (Fusion RNN). Fusion RNN is composed of a Fusion module and a Transport module at every time step. The Fusion module performs multi-round fusion of the input and the hidden state vector. The Transport module, essentially a simple recurrent network, calculates the hidden state and passes it to the next time step. Furthermore, in order to evaluate Fusion RNN's sequence feature extraction capability, we choose a representative data mining task for sequence data, estimated time of arrival (ETA), and present a novel model based on Fusion RNN. We contrast our method with other RNN variants for ETA on massive vehicle travel data from DiDi Chuxing. The results demonstrate that for ETA, Fusion RNN is comparable to the state-of-the-art LSTM and GRU, which are more complicated than Fusion RNN.
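A structural sketch of one Fusion RNN time step follows. The fusion and transport equations here are guessed from the description above and are not the paper's exact formulation:

```python
import numpy as np

def fusion_rnn_step(x, h, p, rounds=2):
    """One Fusion RNN time step: a Fusion module repeatedly mixes the input
    with the hidden state, then a Transport module (a simple recurrent
    cell) produces the next hidden state."""
    f = x
    for _ in range(rounds):                        # multi-round fusion
        f = np.tanh(p["Wf"] @ f + p["Uf"] @ h)     # mix input with state
    h_next = np.tanh(p["Wt"] @ f + p["Ut"] @ h)    # transport: simple RNN
    return h_next

rng = np.random.default_rng(2)
d = 4                                              # input dim == hidden dim
p = {k: rng.normal(size=(d, d)) for k in ["Wf", "Uf", "Wt", "Ut"]}
h = np.zeros(d)
for x in rng.normal(size=(5, d)):                  # run 5 time steps
    h = fusion_rnn_step(x, h, p)
```

Note the absence of LSTM/GRU-style gates: the claimed appeal of Fusion RNN is that repeated fusion of input and state substitutes for gating while keeping the cell simpler.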
Recent advances in artificial intelligence (AI) and deep learning have highlighted the importance of conversational AI chatbots, and research is under way in various fields. Chatbots can be developed from scratch, but for ease of development they are often built on open-source or commercial platforms. These chatbot platforms mainly use recurrent neural networks (RNN) and derived algorithms, which offer fast training, easy monitoring and validation, and good inference performance. This paper studies methods to improve the inference performance of RNNs and their derived algorithms. The proposed method widens the meaning latent in the data through expansion learning over word groups built from the key words of each sentence when applying RNNs and derived algorithms. The results of this study achieved inference performance improvements of at least 0.37% and at most 1.25% across three recurrent algorithms: RNN, GRU (Gated Recurrent Unit), and LSTM (Long Short-Term Memory). These results can help accelerate the adoption of AI chatbots in related industries and encourage the use of various RNN-derived algorithms. Future work should study the effect of various activation functions on the performance of artificial neural network algorithms.
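The keyword-group expansion could look roughly like the following: each training sentence is expanded by substituting members of its key words' word groups, widening the meaning the RNN sees. The synonym groups and sentences are hypothetical; the paper's actual expansion method may differ:

```python
# Hypothetical keyword groups mapping a key word to related words.
synonyms = {
    "price": ["cost", "fee"],
    "refund": ["reimbursement"],
}

def expand(sentence):
    """Return the sentence plus one variant per substitutable keyword."""
    variants = [sentence]
    for word, group in synonyms.items():
        if word in sentence.split():
            variants += [sentence.replace(word, s) for s in group]
    return variants

data = expand("what is the price of a refund")
```

The expanded variants are then added to the training set before fitting the RNN, GRU, or LSTM model.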
The suggested study's objectives are to develop a unique criterion-based method for classifying RBC images and to increase classification accuracy by utilizing deep convolutional neural networks instead of the conventional CNN algorithm. Materials and procedures: a dataset-master image dataset of 790 images is used to apply the deep convolutional neural network. A deep-learning comparison of the convolutional neural network and the deep convolutional neural network has been suggested and developed to improve the classification accuracy of RBC images. Using G*Power, the sample size was calculated as 27 for each group. Results: compared with the convolutional neural network, the deep convolutional neural network had the highest accuracy in classifying blood cell images (95.2%) and the lowest mean error (85.8%). There is a statistically significant difference between the classifiers (p = 0.005). The study demonstrates that deep convolutional neural networks classify images of blood cells more accurately than conventional neural networks [1].
Deep learning is now an active research area and has achieved success in computer vision and image recognition. It is a subset of machine learning, and within deep learning the convolutional neural network (CNN) is a popular deep neural network approach. In this paper, we address how to extract useful leaf features automatically from a leaf dataset through convolutional neural networks (CNN) using deep learning. We show that the accuracy obtained by the CNN approach compares favorably with the accuracy obtained by a traditional neural network.
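The automatic feature extraction a CNN performs can be illustrated with a toy 2-D convolution and max-pooling pass over a synthetic "leaf" image. This is a minimal sketch of the mechanism, not the paper's network; the image and kernel are invented:

```python
import numpy as np

def conv2d(img, k):
    """Valid 2-D cross-correlation, the core CNN operation."""
    kh, kw = k.shape
    h, w = img.shape
    return np.array([[np.sum(img[i:i + kh, j:j + kw] * k)
                      for j in range(w - kw + 1)]
                     for i in range(h - kh + 1)])

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, downsampling the feature map."""
    h, w = fmap.shape
    return np.array([[fmap[i:i + size, j:j + size].max()
                      for j in range(0, w - size + 1, size)]
                     for i in range(0, h - size + 1, size)])

leaf = np.zeros((6, 6)); leaf[2, :] = 1.0   # toy image with one "leaf vein"
kernel = np.array([[1.0], [-1.0]])          # vertical-gradient kernel
features = max_pool(np.maximum(conv2d(leaf, kernel), 0.0))   # conv+ReLU+pool
```

Training a CNN amounts to learning many such kernels from the leaf dataset instead of hand-designing them, which is what "extracting features automatically" means in practice.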