Dynamical User Intention Prediction via Multi-modal Learning

2020 
Predicting users' purchase intentions for different commodities has been receiving increasing attention in many applications, such as bonus awarding decisions and commodity recommendation in E-commerce. Existing methods treat customer-to-commodity data as a flat sequence, ignoring its intrinsic multi-modal nature. We observe that different modalities (e.g., click ratios, collections, and purchase amounts), as well as the elements within each modality, contribute differently to the prediction of purchasing intention. Moreover, existing methods cannot handle the sparsity problem well and therefore fail to predict user intention from sparse data. To address these issues, in this paper we present a novel Dynamical User Intention Prediction via Multi-modal Learning (DUIPML) method that integrates different types of data to dynamically predict user intention while reducing the impact of data sparsity, and that can be readily applied to the practical bonus awarding scenario. Specifically, we first design a multi-modal fusion strategy that integrates different types of behavior information to obtain an initial user intention for each customer in each category. Next, we treat two clients with different preferences as two modalities and propose a multi-modal alignment strategy to explore the latent correlations between them. Based on these correlations, the two clients exchange knowledge to complete and enrich each other's user intentions, thereby alleviating the data sparsity issue. We apply the enriched user intention to the practical bonus awarding scenario on the Taobao platform of Alibaba Group. Experiments on benchmark multi-modal datasets and realistic E-commerce scenarios show that our method significantly outperforms representative related approaches in both effectiveness and adaptability.
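The abstract gives no equations for the fusion step, so the sketch below is purely illustrative: it assumes the "multi-modal fusion strategy" can be approximated as a normalized weighted combination of per-modality user-by-category behavior matrices (click ratios, collections, purchase amounts) into an initial intention matrix. All function names, toy data, and weights are assumptions, not the authors' actual method.

```python
import numpy as np

def fuse_modalities(modalities, weights):
    """Hypothetical fusion step: combine per-modality behavior matrices
    (each users x categories) into one initial intention matrix using
    normalized modality weights. The real DUIPML fusion is not specified
    in the abstract; this is an exposition-only approximation."""
    assert len(modalities) == len(weights)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    return sum(wi * m for wi, m in zip(w, modalities))

# Toy data: 3 users x 2 categories for three behavior modalities.
# User 2 has no recorded behavior, illustrating the sparsity problem
# that the alignment/enrichment step is meant to address.
clicks    = np.array([[0.9, 0.1], [0.2, 0.8], [0.0, 0.0]])
collects  = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
purchases = np.array([[0.5, 0.0], [0.0, 0.5], [0.0, 0.0]])

intention = fuse_modalities([clicks, collects, purchases], [0.2, 0.3, 0.5])
print(intention)
```

Note that the fused matrix still contains an all-zero row for the sparse user; in the paper's pipeline, the cross-client alignment and knowledge exchange would then fill in such rows from a correlated client.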