Trajectory Forecasting Based on Prior-Aware Directed Graph Convolutional Neural Network

2022 
Predicting the motion trajectories of moving agents in complex traffic scenes, such as crossroads and roundabouts, plays an important role in cooperative intelligent transportation systems. Nevertheless, accurately forecasting motion behavior in a dynamic scenario is challenging due to the complex cooperative interactions between moving agents. Graph Convolutional Neural Networks have recently been employed to model these cooperative interactions. Despite the promising performance of the resulting trajectory prediction algorithms, many existing graph-based approaches model interactions with an undirected graph, where the strength of influence between agents is assumed to be symmetric. However, this assumption often does not hold in reality. For example, in pedestrian or vehicle interaction modeling, the moving behavior of a pedestrian or vehicle is strongly affected by the agents ahead, while those ahead usually pay little attention to the ones behind. To fully exploit the asymmetric nature of cooperative interactions in intelligent transportation systems, we present a directed graph convolutional neural network for multi-agent trajectory prediction. First, we propose three directed graph topologies, i.e., the view graph, the direction graph, and the rate graph, which encode different prior knowledge about a cooperative scenario and endow our framework with the capability to effectively characterize the asymmetric influence between agents. Then, a fusion mechanism is devised to jointly exploit the asymmetric mutual relationships embedded in the constructed graphs. Furthermore, a loss function based on the Cauchy distribution is designed to generate multimodal trajectories. Experimental results on complex traffic scenes demonstrate the superior performance of the proposed model compared with existing approaches.
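To make the abstract's two central ideas concrete, the sketch below illustrates (a) a directed "view graph" adjacency, where agent j influences agent i only if j lies within i's field of view, so influence is asymmetric (A[i, j] need not equal A[j, i]), and (b) a Cauchy-inspired regression penalty whose heavy tails tolerate multimodal future positions. This is not the authors' code: the function names, the field-of-view threshold, and the simplified isotropic Cauchy form (the exact multivariate normalization is omitted) are illustrative assumptions.

```python
import numpy as np

def view_graph_adjacency(positions, headings, fov_deg=120.0):
    """positions: (N, 2) array of agent x/y coordinates.
    headings:  (N,) array of heading angles in radians.
    Returns an (N, N) directed adjacency where A[i, j] = 1 means
    agent j is visible to (and hence may influence) agent i."""
    n = positions.shape[0]
    adj = np.zeros((n, n))
    half_fov = np.deg2rad(fov_deg) / 2.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            offset = positions[j] - positions[i]
            bearing = np.arctan2(offset[1], offset[0])
            # Smallest signed angular difference between i's heading
            # and the bearing towards j, wrapped to [-pi, pi].
            diff = np.arctan2(np.sin(bearing - headings[i]),
                              np.cos(bearing - headings[i]))
            if abs(diff) <= half_fov:
                adj[i, j] = 1.0
    return adj

def cauchy_like_nll(pred, target, gamma=1.0):
    """Cauchy-inspired negative log-likelihood on the squared Euclidean
    prediction error; heavier tails than a Gaussian loss keep the model
    from collapsing all probability mass onto a single mode."""
    sq_err = np.sum((pred - target) ** 2, axis=-1)
    return np.mean(np.log(np.pi * gamma) + np.log(1.0 + sq_err / gamma ** 2))
```

In a full pipeline, such a directed adjacency would typically be row-normalized and used as the propagation matrix of a graph convolution layer, and the direction and rate graphs described in the abstract could presumably be built analogously from relative headings and speeds before fusion.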