    Probabilistic Trajectory Prediction for Autonomous Vehicles with Attentive Recurrent Neural Process
    Abstract:
Predicting the behavior of surrounding vehicles is critical for autonomous vehicles negotiating multi-vehicle interaction scenarios. Most existing approaches require a tedious training process with large amounts of data and may fail to capture the uncertainty that propagates through interaction behaviors. We assume that multi-vehicle behaviors are generated by a stochastic process. This paper proposes an attentive recurrent neural process (ARNP) approach to overcome the above limitations, using a neural process (NP) to learn a distribution over multi-vehicle interaction behaviors. The proposed model inherits the flexibility of neural networks while retaining Bayesian probabilistic characteristics. Constructed by combining NPs with recurrent neural networks (RNNs), the ARNP model predicts the distribution of a target vehicle's trajectory conditioned on observed long-term sequential data from all surrounding vehicles. The approach is verified by learning and predicting lane-changing trajectories in complex traffic scenarios. Experimental results demonstrate that the proposed method outperforms previous counterparts in terms of accuracy and uncertainty expressiveness. Moreover, the meta-learning nature of NPs enables the ARNP model to capture global information from all observations and thus adapt to new targets efficiently.
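The core idea in the abstract is to condition a predictive distribution over a target vehicle's future trajectory on encoded observations of all surrounding vehicles. The sketch below illustrates only that general pattern; the class name `TrajectoryNPSketch`, the GRU encoder, the mean-pooling aggregation, and all dimensions are illustrative assumptions, not the paper's actual ARNP architecture (which additionally uses NP latent variables and attention).

```python
import torch
import torch.nn as nn

class TrajectoryNPSketch(nn.Module):
    """Minimal sketch: encode observed histories of all surrounding vehicles
    with an RNN, aggregate them into a context vector, and decode a Gaussian
    distribution over the target vehicle's future positions."""
    def __init__(self, obs_dim=2, hidden=64, horizon=10):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.decoder = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, horizon * obs_dim * 2),  # mean and log-variance per step
        )

    def forward(self, histories):
        # histories: (num_vehicles, obs_len, obs_dim) observed (x, y) tracks
        _, h = self.encoder(histories)               # (1, num_vehicles, hidden)
        context = h.squeeze(0).mean(dim=0)           # permutation-invariant aggregation
        stats = self.decoder(context).view(self.horizon, 2, -1)
        mean, log_var = stats[:, 0, :], stats[:, 1, :]
        return torch.distributions.Normal(mean, torch.exp(0.5 * log_var))

# Usage: sample a predicted future trajectory for the target vehicle.
model = TrajectoryNPSketch()
dist = model(torch.randn(5, 20, 2))    # 5 vehicles, 20 observed steps each
future = dist.sample()                 # (horizon, 2) sampled positions
```

Returning a distribution rather than a point estimate is what gives this kind of model its uncertainty expressiveness: the decoder's variance output can widen when the observed interaction is ambiguous.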
In this work, we propose a novel recurrent neural network (RNN) architecture. The proposed RNN, the gated-feedback RNN (GF-RNN), extends the existing approach of stacking multiple recurrent layers by allowing and controlling signals flowing from upper recurrent layers to lower layers using a global gating unit for each pair of layers. The recurrent signals exchanged between layers are gated adaptively based on the previous hidden states and the current input. We evaluated the proposed GF-RNN with different types of recurrent units, such as tanh, long short-term memory (LSTM), and gated recurrent units (GRU), on the tasks of character-level language modeling and Python program evaluation. Our empirical evaluation of different RNN units revealed that in both tasks the GF-RNN outperforms the conventional approaches to building deep stacked RNNs. We suggest that the improvement arises because the GF-RNN can adaptively assign different layers to different timescales and layer-to-layer interactions (including the top-down ones that are not usually present in a stacked RNN) by learning to gate these interactions.
    Python
    Citations (252)
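The GF-RNN abstract above describes per-layer-pair global gates that control top-down and bottom-up recurrent connections in a stacked RNN. The following sketch shows one time step of that idea with plain tanh units; the class name `GatedFeedbackStep`, the two-layer setup, and the exact gate parameterization are illustrative assumptions rather than the paper's reference implementation, which also covers LSTM and GRU units.

```python
import torch
import torch.nn as nn

class GatedFeedbackStep(nn.Module):
    """One time step of a simplified gated-feedback RNN: every pair of layers
    (i -> j) has a global sigmoid gate that scales how much of layer i's
    previous hidden state feeds into layer j's update."""
    def __init__(self, in_dim, hidden, n_layers=2):
        super().__init__()
        self.n_layers = n_layers
        dims = [in_dim] + [hidden] * (n_layers - 1)          # input dim seen by each layer
        self.w_in = nn.ModuleList([nn.Linear(d, hidden) for d in dims])
        # recurrent weights from every layer i to every layer j
        self.u = nn.ModuleList([nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                               for _ in range(n_layers)])
                                for _ in range(n_layers)])
        # gates g^{i->j}, computed from layer j's input and all previous hidden states
        self.gate = nn.ModuleList([nn.Linear(d + n_layers * hidden, n_layers) for d in dims])

    def forward(self, x, h_prev):
        # x: (batch, in_dim); h_prev: list of (batch, hidden) states, one per layer
        h_cat = torch.cat(h_prev, dim=-1)
        inp, new_h = x, []
        for j in range(self.n_layers):
            g = torch.sigmoid(self.gate[j](torch.cat([inp, h_cat], dim=-1)))  # (batch, n_layers)
            rec = sum(g[:, i:i + 1] * self.u[i][j](h_prev[i]) for i in range(self.n_layers))
            h_j = torch.tanh(self.w_in[j](inp) + rec)
            new_h.append(h_j)
            inp = h_j                                  # layer j's new state feeds layer j + 1
        return new_h
```

With the gates fixed at 1 for i = j and 0 elsewhere, the step reduces to a conventional stacked RNN, which is the baseline the abstract compares against.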
The Recurrent Neural Network (RNN) and its variants, such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), have become standard building blocks for learning from sequential data in many research areas, including natural language processing and speech analysis. In this paper, we present a new methodology that significantly reduces the number of parameters in RNNs while maintaining performance comparable to, or even better than, that of classical RNNs. The proposal, referred to as the Restricted Recurrent Neural Network (RRNN), restricts the weight matrices corresponding to the input data and the hidden states at each time step to share a large proportion of parameters. The new architecture can be regarded as a compression of its classical counterpart, but it requires neither pre-training nor sophisticated parameter fine-tuning, both of which are major issues in most existing compression techniques. Experiments on natural language modeling show that, compared with its classical counterpart, the restricted recurrent architecture generally produces comparable results at about a 50% compression rate. In particular, the Restricted LSTM can outperform the classical RNN with even fewer parameters.
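The compression mechanism described above is parameter sharing between the input-to-hidden and hidden-to-hidden weight matrices. The cell below is a hedged illustration of that idea only: the exact sharing scheme of the RRNN paper is not reproduced, and the class name, the column-wise split, and the `shared_frac` parameter are assumptions made for the example.

```python
import torch
import torch.nn as nn

class RestrictedRNNCell(nn.Module):
    """Illustrative 'restricted' tanh RNN cell: the input and recurrent weight
    matrices share a common block of columns, so the cell stores roughly half
    as many recurrent parameters as a vanilla cell of the same size."""
    def __init__(self, dim, shared_frac=0.8):
        super().__init__()
        k = int(dim * shared_frac)                                 # shared columns
        self.shared = nn.Parameter(torch.randn(dim, k) * 0.1)      # used by both matrices
        self.w_x = nn.Parameter(torch.randn(dim, dim - k) * 0.1)   # input-specific part
        self.w_h = nn.Parameter(torch.randn(dim, dim - k) * 0.1)   # hidden-specific part
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, x, h):
        # Rebuild the full matrices from the shared block plus the small
        # role-specific blocks, then apply the usual tanh recurrence.
        W_in = torch.cat([self.shared, self.w_x], dim=1)    # (dim, dim)
        W_rec = torch.cat([self.shared, self.w_h], dim=1)   # (dim, dim)
        return torch.tanh(x @ W_in.T + h @ W_rec.T + self.bias)
```

Because the shared block dominates both matrices, the parameter count scales roughly as (1 + 2 * (1 - shared_frac)) * dim^2 instead of 2 * dim^2, which is where the reported ~50% compression rate comes from under this kind of scheme.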
Abstract: In this paper, we explore different ways to extend a recurrent neural network (RNN) to a deep RNN. We start by arguing that the concept of depth in an RNN is not as clear as it is in feedforward neural networks. By carefully analyzing and understanding the architecture of an RNN, however, we find three points of an RNN that may be made deeper: (1) the input-to-hidden function, (2) the hidden-to-hidden transition, and (3) the hidden-to-output function. Based on this observation, we propose two novel architectures of a deep RNN that are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN (Schmidhuber, 1992; El Hihi and Bengio, 1996). We provide an alternative interpretation of these deep RNNs using a novel framework based on neural operators. The proposed deep RNNs are empirically evaluated on the tasks of polyphonic music prediction and language modeling. The experimental results support our claim that the proposed deep RNNs benefit from the depth and outperform conventional, shallow RNNs.
    Feedforward neural network
    Feed forward
    Citations (613)
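The abstract above identifies three places where an RNN can be made deep: the input-to-hidden function, the hidden-to-hidden transition, and the hidden-to-output function. The cell below marks those three places explicitly; it is a minimal sketch, and the helper `mlp`, the class name, and the network sizes are assumptions, while the paper's own variants (deep-transition and deep-output RNNs) additionally add shortcut connections.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=64):
    """Small two-layer network used wherever depth is inserted."""
    return nn.Sequential(nn.Linear(d_in, hidden), nn.Tanh(), nn.Linear(hidden, d_out))

class DeepTransitionRNNCell(nn.Module):
    """One step of an RNN that is deep at the three points named in the abstract."""
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.phi_in = mlp(in_dim, hidden_dim)                # (1) deep input-to-hidden
        self.phi_trans = mlp(2 * hidden_dim, hidden_dim)     # (2) deep hidden-to-hidden
        self.phi_out = mlp(hidden_dim, out_dim)              # (3) deep hidden-to-output

    def forward(self, x, h):
        h_new = torch.tanh(self.phi_trans(torch.cat([self.phi_in(x), h], dim=-1)))
        return h_new, self.phi_out(h_new)
```

Replacing each `mlp` with a single linear layer recovers a conventional shallow RNN cell, which makes the notion of "depth" in the abstract concrete.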
Recent advances in artificial intelligence (AI) and deep learning have highlighted the importance of conversational AI chatbots, and research is under way in many fields. Chatbots are sometimes developed from scratch, but for ease of development they are often built on open-source or commercial platforms. These chatbot platforms mainly use RNNs (Recurrent Neural Networks) and related algorithms, which offer fast training, easy monitoring and validation, and good inference performance. In this paper, we study how to improve the inference performance of RNNs and their variants. The proposed method broadens the meaning inherent in the data by applying expansion learning to the word group of each sentence's key word when RNNs and their variants are trained. Our results show inference performance improvements of at least 0.37% and at most 1.25% across three recurrent algorithms: RNN, GRU (Gated Recurrent Unit), and LSTM (Long Short-Term Memory). These findings can help accelerate the adoption of AI chatbots in related industries and encourage the use of various RNN-based algorithms. Future work should investigate how different activation functions affect the performance of artificial neural network algorithms.
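The abstract describes expanding each training sentence via the word group of its key word so the model sees broader phrasings of the same intent. The snippet below is only a hypothetical illustration of that kind of keyword-group expansion; the word groups, sentences, and function name are invented for the example and are not taken from the paper.

```python
# Hypothetical keyword word-group expansion for chatbot training data:
# each sentence is duplicated with its key word replaced by the other
# members of that word's group, broadening what the RNN sees per intent.
WORD_GROUPS = {
    "refund": ["refund", "reimbursement", "money back"],
    "cancel": ["cancel", "terminate", "stop"],
}

def expand(sentence: str, keyword: str) -> list[str]:
    """Return one variant of the sentence per word in the keyword's group."""
    group = WORD_GROUPS.get(keyword, [keyword])
    return [sentence.replace(keyword, w) for w in group]

print(expand("I want to cancel my order", "cancel"))
# ['I want to cancel my order', 'I want to terminate my order', 'I want to stop my order']
```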
Recent work has shown that topological enhancements to recurrent neural networks (RNNs) can increase their expressiveness and representational capacity. Two popular enhancements are stacked RNNs, which increase the capacity for learning non-linear functions, and bidirectional processing, which exploits acausal information in a sequence. In this work, we explore the delayed-RNN, a single-layer RNN with a delay between input and output. We prove that a weight-constrained version of the delayed-RNN is equivalent to a stacked RNN. We also show that the delay gives rise to partial acausality, much like bidirectional networks. Synthetic experiments confirm that the delayed-RNN can mimic bidirectional networks, solving some acausal tasks similarly and outperforming them in others. Moreover, we show similar performance to bidirectional networks on a real-world natural language processing task. These results suggest that delayed-RNNs can approximate topologies including stacked RNNs, bidirectional RNNs, and stacked bidirectional RNNs, but with equivalent or faster runtimes.
    Reservoir computing
    Sequence (biology)
    Citations (1)
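The delayed-RNN described above reads out the prediction for input step t from the hidden state at step t + d, so the network sees d future inputs before committing to an output. The sketch below shows one simple way to realize that delay; the class name, the zero-padding scheme, and the delay value are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class DelayedRNN(nn.Module):
    """Illustrative single-layer delayed RNN: outputs are read out d steps
    after the corresponding input, giving partial acausality."""
    def __init__(self, in_dim, hidden, out_dim, delay=3):
        super().__init__()
        self.delay = delay
        self.rnn = nn.RNN(in_dim, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, out_dim)

    def forward(self, x):
        # x: (batch, T, in_dim); pad with d zero frames so every input step
        # still gets a readout, d steps later.
        pad = x.new_zeros(x.size(0), self.delay, x.size(2))
        h, _ = self.rnn(torch.cat([x, pad], dim=1))      # (batch, T + d, hidden)
        return self.readout(h[:, self.delay:, :])        # re-align outputs with inputs

out = DelayedRNN(8, 32, 4, delay=3)(torch.randn(2, 50, 8))
print(out.shape)   # torch.Size([2, 50, 4])
```

Setting delay=0 recovers an ordinary causal RNN, while increasing the delay gives the model more future context per output, which is the bidirectional-like behavior the abstract reports.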
With the aim of predicting state changes inside a polymerization reactor over the long term, we investigated prediction methods based on neural networks (NNs). Taking the change in reactor outlet temperature in a continuous bulk styrene polymerization process as the prediction target, we examined long-term prediction with a hierarchical (feedforward) NN and a recurrent NN (RNN), identified a structural problem of the RNN, and proposed two improvements to the processing performed by the network's hidden layer: H-RNN, which adds extra processing at the hidden units, and M-RNN, which adds the hidden-unit computation mechanism of a hierarchical NN as a module. The prediction performance of each method was compared and evaluated. Using the improved RNNs, prediction in the initial stage and prediction of changes with extrema were improved compared with the hierarchical NN. In particular, M-RNN, which was designed to raise the accuracy of the initial-stage prediction and fuses the hidden layer of a hierarchical NN with an RNN, gave satisfactory prediction performance not only in the first half of the horizon but over the entire long-term prediction.
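The abstract concerns long-term (multi-step) prediction of a process variable with recurrent networks; the specific H-RNN and M-RNN modifications are not detailed enough here to reproduce. The sketch below therefore only shows the generic closed-loop setup in which an RNN, after warming up on measured values such as an outlet temperature, feeds its own one-step predictions back as inputs to forecast far ahead; the class name, GRU cell, and sizes are assumptions.

```python
import torch
import torch.nn as nn

class ClosedLoopForecaster(nn.Module):
    """Generic closed-loop long-term forecaster (not the paper's H-RNN/M-RNN):
    warm up on measurements, then roll forward on the model's own predictions."""
    def __init__(self, hidden=32):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.GRUCell(1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, measured, horizon):
        # measured: (batch, T, 1) observed values, e.g. reactor outlet temperature
        h = measured.new_zeros(measured.size(0), self.hidden)
        for t in range(measured.size(1)):             # warm-up on real measurements
            h = self.cell(measured[:, t, :], h)
        x = self.head(h)                              # one-step-ahead prediction
        preds = [x]
        for _ in range(horizon - 1):                  # roll forward on own predictions
            h = self.cell(x, h)
            x = self.head(h)
            preds.append(x)
        return torch.stack(preds, dim=1)              # (batch, horizon, 1)
```

Error accumulation in this closed loop is exactly what makes the initial-stage and long-horizon accuracy discussed in the abstract difficult, which motivates architectural improvements to the hidden-layer processing.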