Battery degradation prediction against uncertain future conditions with recurrent neural network enabled deep learning
Citations: 140 · References: 45 · Related Papers: 10
The military-specialized high school policy was introduced to secure a supply of high-quality military human resources and, being grounded in school-military cooperation, has the character of military human resource management (Military HRM). This study uses machine learning to empirically analyze the human resource development aspects inherent in the policy and presents a prediction model for specialist-soldier selection along with its key variables.

To this end, the education and career data of about 850 graduates of military-specialized high school A in Korea were preprocessed, yielding roughly 50 input variables. With 'specialist-soldier selection' as the target variable, the class imbalance of the target variable was resolved through oversampling before training the machine-learning prediction models.

To establish an optimal model for accurately predicting specialist-soldier selection, five machine-learning algorithms (Random Forest, XGBoost, LightGBM, SVM, and logistic regression) were applied to both the class-imbalanced source data and the oversampled data, training ten models in total. Stratified k-fold cross-validation was performed during model training to prevent overfitting and to search for hyperparameters suited to the optimal model.

Across both the source data and the oversampled data, the models trained with the Random Forest algorithm achieved the best predictive performance. By AUC, the Random Forest model trained on the source data (RF) reached approximately 0.76, while the Random Forest model trained on the oversampled data (RF_over) improved to around 0.85. Evaluating input-variable importance showed that among the roughly 50 input variables, those related to major-specific expertise, such as 'license acquired/not acquired' and 'major craftsman certificate', had the greatest influence on specialist-soldier selection.

Additionally, to check for model bias, evaluation on randomly sampled source and oversampled data showed the AUC values of both RF and RF_over converging to 0.5. This can be understood as the trained models achieving a substantial level of performance without depending on any particular variable.

The results of this study not only demonstrate the potential of machine-learning research on military-specialized high schools but also show that the factors contributing to the policy's effectiveness can be identified in actual educational settings. These findings raise the need for human resource management that strengthens major expertise and education/training for the smooth selection and supply of specialist soldiers. They also suggest the possibility of gaining insights through machine learning and of data-driven, enterprise-wide military human resource management.
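The stratified k-fold step described above can be sketched in pure Python. The function and the toy labels below are illustrative only (a real pipeline would typically use scikit-learn's `StratifiedKFold`); the point is that each fold preserves the class ratio of an imbalanced target:

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Split sample indices into k folds that preserve class proportions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for y, idxs in by_class.items():
        rng.shuffle(idxs)
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)  # deal each class round-robin across folds
    return folds

# Imbalanced toy labels: 90 'not selected' (0) vs 10 'selected' (1)
labels = [0] * 90 + [1] * 10
folds = stratified_kfold(labels, k=5)
# Every fold keeps the 9:1 ratio: 20 samples, 2 of them positive
for fold in folds:
    assert len(fold) == 20 and sum(labels[i] for i in fold) == 2
```

Note that when oversampling is combined with cross-validation, the oversampling is applied to the training folds only, so synthetic samples never leak into the held-out fold.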
Recurrent neural networks (RNNs) have become a popular technology for automatic speech recognition (ASR). However, the vanilla RNN is difficult to train due to the vanishing-gradient problem and thus performs poorly. Several units with gate mechanisms have been proposed to solve this problem, such as the gated recurrent unit (GRU), long short-term memory (LSTM), projected LSTM (LSTMP), projected GRU (PGRU), and output-gated PGRU (OPGRU). In this work, we aim to evaluate the performance of the above RNN units for acoustic modeling in a Mandarin ASR task. We evaluate three conditions: unidirectional RNN, bidirectional RNN (BRNN), and time delay neural network (TDNN)-RNN. The experiments were done on the Aishell-1 corpus using the Kaldi toolkit. The results show that PGRU achieves the best performance in all three conditions, and its model size is also smaller than that of LSTM and LSTMP.
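The model-size claim can be checked with a rough parameter count. The formulas below follow the standard gate equations (input and recurrent weight matrices plus biases, with an extra projection matrix for the projected variants); the exact counts in a Kaldi recipe may differ slightly (e.g. peephole or extra affine components):

```python
def lstm_params(n_in, n_hid):
    # 4 gates, each with input + recurrent weights and a bias
    return 4 * (n_hid * (n_in + n_hid) + n_hid)

def lstmp_params(n_in, n_hid, n_proj):
    # recurrence runs over the projected state, plus the projection matrix
    return 4 * (n_hid * (n_in + n_proj) + n_hid) + n_proj * n_hid

def gru_params(n_in, n_hid):
    # 3 gate/candidate blocks
    return 3 * (n_hid * (n_in + n_hid) + n_hid)

def pgru_params(n_in, n_hid, n_proj):
    return 3 * (n_hid * (n_in + n_proj) + n_hid) + n_proj * n_hid

n_in, n_hid, n_proj = 512, 1024, 256   # illustrative sizes, not the paper's
print(lstm_params(n_in, n_hid))                 # 6,295,552
print(lstmp_params(n_in, n_hid, n_proj))        # 3,411,968
print(pgru_params(n_in, n_hid, n_proj))         # 2,624,512
```

With the same hidden and projection sizes, PGRU comes out smaller than both LSTM and LSTMP simply because it has three weight blocks per step instead of four, consistent with the abstract's observation.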
Aims: The aim of this work is to develop an enhanced predictive system for Coronary Heart Disease (CHD). Study Design: Synthetic Minority Oversampling Technique (SMOTE) and Random Forest. Methodology: The Framingham heart disease dataset, collected from a study in Framingham, Massachusetts, was used; the data was cleaned, normalized, and rebalanced. Classifiers such as random forest, artificial neural network, naïve Bayes, logistic regression, k-nearest neighbor, and support vector machine were used for classification. Results: Random Forest outperformed the other classifiers with an accuracy of 98%, a sensitivity of 99%, and a precision of 95.8%. Feature selection was employed for better classification, but no significant improvement was recorded in the performance of the classifier with feature selection. Train-test split also performed better than cross-validation. Conclusion: Random Forest is recommended for research in the Coronary Heart Disease prediction domain.
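The rebalancing step (SMOTE in the study design) can be sketched in pure Python. This minimal version is not the full algorithm: it interpolates between two randomly chosen minority samples instead of a sample and one of its k nearest neighbours, and the function name and toy data are mine; a real pipeline would use `SMOTE` from imbalanced-learn:

```python
import random

def smote_like_oversample(X_min, n_new, seed=0):
    """Generate n_new synthetic minority rows by linear interpolation."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(X_min, 2)       # two distinct minority samples
        lam = rng.random()                # interpolation factor in [0, 1)
        synthetic.append([ai + lam * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

# Toy minority class: 5 rows with 3 features each
minority = [[1.0, 2.0, 0.50], [1.2, 1.8, 0.40], [0.9, 2.2, 0.60],
            [1.1, 2.1, 0.50], [1.0, 1.9, 0.45]]
new_rows = smote_like_oversample(minority, n_new=10)
assert len(new_rows) == 10   # each row lies between two existing minority rows
```

Because each synthetic row is a convex combination of two real minority rows, every generated feature value stays within the observed range of that feature.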
    Cross-validation
In this work, we propose a novel recurrent neural network (RNN) architecture. The proposed RNN, gated-feedback RNN (GF-RNN), extends the existing approach of stacking multiple recurrent layers by allowing and controlling signals flowing from upper recurrent layers to lower layers using a global gating unit for each pair of layers. The recurrent signals exchanged between layers are gated adaptively based on the previous hidden states and the current input. We evaluated the proposed GF-RNN with different types of recurrent units, such as tanh, long short-term memory and gated recurrent units, on the tasks of character-level language modeling and Python program evaluation. Our empirical evaluation of different RNN units revealed that in both tasks, the GF-RNN outperforms the conventional approaches to build deep stacked RNNs. We suggest that the improvement arises because the GF-RNN can adaptively assign different layers to different timescales and layer-to-layer interactions (including the top-down ones which are not usually present in a stacked RNN) by learning to gate these interactions.
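The gated-feedback idea can be illustrated with one time step of a two-layer tanh GF-RNN in pure Python. This is a minimal sketch, not the paper's implementation: the gates here are scalars computed from the layer's bottom-up input and the concatenation of all previous hidden states, all weight names are mine, and biases are omitted:

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def gf_rnn_step(x, h_prev, W_in, U_rec, wg_in, ug_prev):
    """One step of a stacked tanh RNN with gated-feedback connections.

    h_prev      : previous hidden states, one vector per layer
    W_in[j]     : bottom-up weights into layer j (layer 0 reads x)
    U_rec[j][i] : recurrent weights from layer i at t-1 into layer j at t
    wg_in, ug_prev : parameters of the scalar global gate for each i -> j pair
    """
    h_star = [v for h in h_prev for v in h]   # concat of all previous states
    h_new, below = [], x
    num_layers = len(h_prev)
    for j in range(num_layers):
        pre = matvec(W_in[j], below)          # bottom-up signal
        for i in range(num_layers):
            # global (scalar) gate controlling the i -> j feedback signal
            g = sigmoid(dot(wg_in[j][i], below) + dot(ug_prev[j][i], h_star))
            rec = matvec(U_rec[j][i], h_prev[i])
            pre = [p + g * r for p, r in zip(pre, rec)]
        h = [math.tanh(p) for p in pre]
        h_new.append(h)
        below = h                             # feed this layer's output upward
    return h_new

# Tiny random instantiation: 2 layers, input size 3, hidden size 4
rng = random.Random(0)
def rand_mat(r, c): return [[rng.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]
d_in, d_h, L = 3, 4, 2
W_in = [rand_mat(d_h, d_in)] + [rand_mat(d_h, d_h) for _ in range(L - 1)]
U_rec = [[rand_mat(d_h, d_h) for _ in range(L)] for _ in range(L)]
wg_in = [[[rng.uniform(-0.5, 0.5) for _ in range(d_in if j == 0 else d_h)]
          for _ in range(L)] for j in range(L)]
ug_prev = [[[rng.uniform(-0.5, 0.5) for _ in range(L * d_h)]
            for _ in range(L)] for j in range(L)]
h = [[0.0] * d_h for _ in range(L)]
for x in ([0.1, -0.2, 0.3], [0.0, 0.5, -0.1]):
    h = gf_rnn_step(x, h, W_in, U_rec, wg_in, ug_prev)
assert len(h) == L and all(len(layer) == d_h for layer in h)
```

The key difference from a plain stacked RNN is the inner loop over `i`: every layer receives gated recurrent input from *all* layers at the previous step, including top-down connections, rather than only from itself.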
    Python
    Citations (252)
Considering deep sequence learning for practical applications, two representative RNNs, LSTM and GRU, may come to mind first. Nevertheless, is there no chance for other RNNs? Will there be a better RNN in the future? In this work, we propose a novel, succinct and promising RNN: the Fusion Recurrent Neural Network (Fusion RNN). Fusion RNN is composed of a Fusion module and a Transport module at every time step. The Fusion module performs multi-round fusion of the input and hidden state vectors. The Transport module, which mainly resembles a simple recurrent network, calculates the hidden state and passes it to the next time step. Furthermore, to evaluate Fusion RNN's sequence feature extraction capability, we choose a representative data mining task for sequence data, estimated time of arrival (ETA), and present a novel model based on Fusion RNN. We compare our method with other RNN variants for ETA on massive vehicle travel data from DiDi Chuxing. The results demonstrate that for ETA, Fusion RNN is comparable to the state-of-the-art LSTM and GRU, which are more complicated than Fusion RNN.
    Sequence (biology)
    Sensor Fusion
    Citations (3)
With recent advances in artificial intelligence (AI) and deep learning, conversational AI chatbots have gained prominence and are being studied in various fields. Chatbots can be developed from scratch, but for ease of development they are often built on open-source or commercial platforms. These chatbot platforms mainly use RNNs (Recurrent Neural Networks) and derived algorithms, which offer fast training, easy monitoring and validation, and good inference performance. In this paper, we study methods for improving the inference performance of RNNs and derived algorithms. The proposed method broadens the meaning inherent in the data through expanded learning on the word groups of the key words in each sentence when applying RNNs and derived algorithms. The results of this study achieved inference-performance improvements of at least 0.37% and at most 1.25% across three recurrent algorithms: RNN, GRU (Gated Recurrent Unit), and LSTM (Long Short-Term Memory). These findings can help accelerate the adoption of AI chatbots in related industries and encourage the use of various RNN-based algorithms. Future work should examine how various activation functions affect the performance of artificial neural network algorithms.
    Recent work has shown that topological enhancements to recurrent neural networks (RNNs) can increase their expressiveness and representational capacity. Two popular enhancements are stacked RNNs, which increases the capacity for learning non-linear functions, and bidirectional processing, which exploits acausal information in a sequence. In this work, we explore the delayed-RNN, which is a single-layer RNN that has a delay between the input and output. We prove that a weight-constrained version of the delayed-RNN is equivalent to a stacked-RNN. We also show that the delay gives rise to partial acausality, much like bidirectional networks. Synthetic experiments confirm that the delayed-RNN can mimic bidirectional networks, solving some acausal tasks similarly, and outperforming them in others. Moreover, we show similar performance to bidirectional networks in a real-world natural language processing task. These results suggest that delayed-RNNs can approximate topologies including stacked RNNs, bidirectional RNNs, and stacked bidirectional RNNs - but with equivalent or faster runtimes for the delayed-RNNs.
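The delay mechanism can be illustrated without any training: a delayed-RNN emits the prediction for position t only after consuming inputs up to t+d, so each output gets d steps of future (acausal) context, much as a bidirectional RNN does. The sketch below is mine, using a trivial running-sum "cell" so the acausal context is visible in the output:

```python
def delayed_rnn_outputs(xs, step, h0, delay):
    """Run a recurrent cell over xs; the output for position t is read at t+delay.

    step : any recurrent update, a function (h, x) -> h
    Returns states aligned so that output i corresponds to input position i
    (the last `delay` positions would need padding and are dropped here).
    """
    h, states = h0, []
    for x in xs:
        h = step(h, x)
        states.append(h)
    # the state emitted for position t has seen xs[: t + delay + 1]
    return states[delay:]

# Toy cell: running sum, so each output shows exactly what it has "seen"
step = lambda h, x: h + x
xs = [1, 2, 3, 4, 5]
out = delayed_rnn_outputs(xs, step, h0=0, delay=2)
# out[0] is the prediction for position 0, computed after seeing xs[:3]
assert out == [6, 10, 15]
```

With `delay=0` this reduces to an ordinary causal RNN, while larger delays trade latency for more future context, which is the partial acausality the paper proves.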
    Reservoir computing
    Sequence (biology)
    Citations (1)