Factorization through Latent Dirichlet Allocation

Introduction

In Chapter 8, we described a bilinear latent factor model, RLFM, that captures user-item interactions through a multiplicative function $u_i' v_j$, where $u_i$ and $v_j$ are unknown vectors associated with user $i$ and item $j$, respectively (often referred to as latent factors). The latent factors live in a Euclidean space and are regularized with a Gaussian prior whose mean is determined by a regression function based on user and item features. This incorporates both cold-start and warm-start aspects into a single modeling framework.

In this chapter, we describe a new factor model called factorized latent Dirichlet allocation (fLDA), which is suited to incorporating rich bag-of-words-type item features and user responses simultaneously to enhance predictions. Such scenarios are commonplace in web applications like content recommendation, advertising, and web search. We note that "word" in our context is a general term used to denote elements like phrases, entities, and others. We empirically show that this model provides better accuracy than state-of-the-art factor models for items with textual metadata that is amenable to topic modeling. As a by-product, interpretable item topics help in explaining recommendations. We also show that when rich item metadata is unavailable or noisy, this method is still comparable in accuracy to state-of-the-art factorization models. However, the model fitting is computationally more intensive than in RLFM.

The key idea in fLDA is to let the user factors (or profiles) take values in a Euclidean space, as in RLFM, but to assign item factors through a richer prior based on LDA (Blei et al., 2003). Specifically, we model the affinity between user $i$ and item $j$ as $u_i' \bar{z}_j$, where $\bar{z}_j$ is a multinomial probability vector representing the soft cluster membership scores of item $j$ over $K$ latent topics, and $u_i$ represents user $i$'s affinity to those topics. The main idea in LDA is to attach a discrete latent factor to each word of an item that can take $K$ different values ($K$ topics) and to produce item topics by averaging the per-word topic assignments within the item. Thus, a news article where 80 percent of the words are assigned to politics and the rest to education can be thought of as an article about politics but perhaps related to an issue in education; a small numerical sketch of this computation follows below.
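To make the notation concrete, the following minimal Python sketch (illustrative only; the variable names such as word_topics_j and u_i are ours, not from the text) computes the soft topic membership vector $\bar{z}_j$ by averaging per-word topic assignments, then evaluates the fLDA affinity $u_i' \bar{z}_j$, mirroring the politics/education example above.

```python
import numpy as np

K = 3  # number of latent topics (assumed small here for illustration)

# Per-word topic assignments for item j, one topic index per word,
# as an LDA-style sampler might produce: 80% topic 0, 20% topic 1.
word_topics_j = np.array([0, 0, 0, 0, 1])

# Soft cluster membership \bar{z}_j: the empirical distribution of
# the per-word topic assignments (a multinomial probability vector).
z_bar_j = np.bincount(word_topics_j, minlength=K) / len(word_topics_j)
# -> array([0.8, 0.2, 0.0])

# User factor u_i: user i's affinity to each of the K topics.
u_i = np.array([1.2, -0.4, 0.3])

# fLDA models the user-item affinity as u_i' z_bar_j, in contrast to
# RLFM's u_i' v_j, where v_j is an unconstrained item latent vector.
affinity = u_i @ z_bar_j
print(affinity)  # 1.2 * 0.8 + (-0.4) * 0.2 = 0.88
```

Because $\bar{z}_j$ is tied to the item's words through the LDA prior rather than being a free parameter, the item side of the model inherits the interpretability of topics while the user side remains a Euclidean factor vector, as in RLFM.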