Sequential Recommendation with a Pre-trained Module Learning Multi-modal Information

2020 
Recommendation has attracted increasing attention in industry and in the research community. In particular, sequential recommendation, which exploits time-series information, has been developed to better satisfy users' evolving needs. Most efforts focus on capturing users' dynamic preferences; however, the multi-modal information of items also contributes to recommendation. A user may be interested in a pair of shoes because of its color and style, or because it is suitable for dating, so the modalities (image, text) carry different aspects of information about an item. We design a new three-stream framework that captures a user's preferences for these different aspects using the self-attention mechanism: the image stream captures the user's interest in what an item looks like, the description stream captures the user's preference for an item's function, and the item stream captures the sequential preference. The key point of our method is that it utilizes a pre-trained item representation that has learned information from multiple modalities. Experiments on public datasets show that our model achieves clear performance improvements over most existing sequential recommendation models.
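The paper does not include code, but the three-stream design can be illustrated with a minimal sketch, assuming a standard PyTorch setup: each stream (item IDs, pre-trained image representations, pre-trained text representations) is encoded by its own self-attention encoder, and the three sequence summaries are fused before scoring candidate items. All class and parameter names here are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption, not the paper's code) of a three-stream
# self-attention sequential recommender: one stream over item-ID embeddings,
# one over pre-trained image features, one over pre-trained text features.
import torch
import torch.nn as nn


class ThreeStreamSeqRec(nn.Module):
    def __init__(self, num_items, dim=64, n_heads=2, n_layers=1, max_len=50):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, dim, padding_idx=0)
        # Stand-ins for pre-trained multi-modal item representations; in practice
        # these would be initialized from image/text encoders and possibly frozen.
        self.img_emb = nn.Embedding(num_items + 1, dim, padding_idx=0)
        self.txt_emb = nn.Embedding(num_items + 1, dim, padding_idx=0)
        self.pos_emb = nn.Embedding(max_len, dim)

        def encoder():
            layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=n_heads, dim_feedforward=dim * 4, batch_first=True
            )
            return nn.TransformerEncoder(layer, num_layers=n_layers)

        # One self-attention encoder per stream.
        self.item_enc, self.img_enc, self.txt_enc = encoder(), encoder(), encoder()
        self.fuse = nn.Linear(3 * dim, dim)

    def forward(self, seq):                      # seq: (batch, seq_len) item ids, 0 = padding
        pos = torch.arange(seq.size(1), device=seq.device).unsqueeze(0)
        pad_mask = seq.eq(0)
        streams = []
        for emb, enc in [(self.item_emb, self.item_enc),
                         (self.img_emb, self.img_enc),
                         (self.txt_emb, self.txt_enc)]:
            h = enc(emb(seq) + self.pos_emb(pos), src_key_padding_mask=pad_mask)
            streams.append(h[:, -1, :])          # last position summarizes the sequence
        user = self.fuse(torch.cat(streams, dim=-1))
        return user @ self.item_emb.weight.t()   # scores over all items


# Usage: next-item scores for a batch of (right-aligned) interaction sequences.
model = ThreeStreamSeqRec(num_items=1000)
scores = model(torch.randint(1, 1001, (8, 20)))  # shape: (8, 1001)
```

How the streams are fused and whether the pre-trained representations stay frozen are design choices the abstract leaves open; the concatenation-plus-linear fusion above is only one plausible option.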