Fast Tensor Factorization for Large-scale Context-aware Recommendation from Implicit Feedback

2018 
This paper presents a fast Tensor Factorization (TF) algorithm for context-aware recommendation from implicit feedback. For such a recommendation problem, the observed data indicate the (positive) association between users and items in some given contexts. For better accuracy, it has been shown to be essential to include unobserved data that indicate negative user-item-context associations. Because such unobserved data greatly outnumber the observed ones, existing algorithms usually sample only a small fraction of the unobserved data for model training, for the sake of efficiency. We show in this paper that it is possible, and beneficial, to use all the unobserved data when training a TF-based context-aware recommender system. This is achieved through two technical innovations. First, we scrutinize the matrix computation of the closed-form solution and accelerate it by memoizing the repetitive computation. Second, we further improve generalization and scalability by dropping out some pairwise interactions when updating the user, item, or context factors, which prevents overfitting and reduces training time. The resulting whole-data-based learning algorithm, referred to as DropTF in the paper, is efficient and scales well. Our evaluation on two small benchmark datasets and a million-scale large dataset demonstrates improved accuracy over some existing algorithms for context-aware recommendation.
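The abstract's first innovation, memoizing the repetitive part of the closed-form whole-data update, can be illustrated in the simpler two-way (matrix factorization) setting. The sketch below is an assumption for illustration only, not the paper's DropTF algorithm: it shows the common trick of caching the Gram matrix of all item factors once per sweep, so that treating every unobserved entry as a weighted negative no longer costs time proportional to users × items. All names (`als_user_update`, `w0`, `R_pos`) are hypothetical.

```python
import numpy as np

def als_user_update(U, V, R_pos, w0, reg):
    """One ALS sweep over users for whole-data implicit feedback.

    Hypothetical sketch of the caching idea: the Gram matrix over ALL
    item factors is computed once and reused for every user, then
    corrected per user only on that user's few observed positives.

    U: (n_users, k) user factors, updated in place
    V: (n_items, k) item factors
    R_pos: dict mapping user -> list of (item, weight) observed positives
    w0: uniform weight assigned to every unobserved (negative) entry
    reg: L2 regularization strength
    """
    k = V.shape[1]
    # Memoized term: w0 * sum_i v_i v_i^T, shared by all user updates.
    gram_all = w0 * (V.T @ V)                       # (k, k)
    for u in range(U.shape[0]):
        A = gram_all + reg * np.eye(k)
        b = np.zeros(k)
        for i, w in R_pos.get(u, []):
            v = V[i]
            # Raise the weight on observed positives from w0 up to w.
            A += (w - w0) * np.outer(v, v)
            b += w * v
        U[u] = np.linalg.solve(A, b)
    return U
```

With the cache, each user's update touches only that user's observed items plus a k × k solve, so the full sweep over all unobserved entries comes essentially for free; the paper's contribution extends this style of reasoning to the three-way tensor case with contexts.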