An Experimental Evaluation of Content-based Recommendation Systems: Can Linked Data and BERT Help?

2020 
Content-based recommendation systems suggest items (e.g., articles, products, objects, services, or places) that are relevant to the user based on the features describing the items. Many content-based recommendation systems include, along with discrete attributes, textual features (e.g., text summaries or comments) obtained from web pages, news articles, etc. Traditionally, to enable its exploitation, the textual information of items is represented using basic information retrieval models (such as the vector space model), which do not take into account natural language challenges involving the semantics of words (synonymy, polysemy, hypernymy, etc.) or language understanding. Other solutions try to exploit those semantics. In this paper, we present an experimental evaluation comparing several recommendation approaches, including a content-based recommender based on vector space models, a deep learning content-based recommendation approach, and a semantic-aware content-based recommendation model. This last approach exploits textual features of items obtained from Linked Open Data (LOD) and BERT (Bidirectional Encoder Representations from Transformers) for language modelling. Deep learning transformers are achieving good results in different NLP (Natural Language Processing) problems, but their use for building content-based recommendation systems has not been explored in depth so far. Our experimental results, focused on the domain of movie recommendations, show that an approach based on BERT can provide good results if enough training data are available.
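To illustrate the general idea of a BERT-based content-based recommender (this is a minimal sketch, not the authors' implementation), item descriptions and a user profile built from liked items can be encoded with a pretrained BERT model and ranked by cosine similarity. The model name, the toy item texts, and the mean-pooling strategy below are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): rank items for a user by the cosine
# similarity between BERT embeddings of item descriptions and a user profile.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed model
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(texts):
    """Return one mean-pooled BERT vector per input text."""
    inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state        # (batch, tokens, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)          # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)            # mean pooling

# Hypothetical item descriptions (e.g., movie plot summaries) and a user
# profile text built from descriptions of items the user liked.
item_texts = [
    "A space crew travels through a wormhole in search of a new home for humanity.",
    "A detective investigates a series of murders in a rain-soaked city.",
]
user_profile = "The user enjoys science-fiction films about space exploration."

item_vecs = embed(item_texts)
user_vec = embed([user_profile])
scores = torch.nn.functional.cosine_similarity(user_vec, item_vecs)
ranking = scores.argsort(descending=True)                  # best-matching items first
print([item_texts[i] for i in ranking])
```

A fine-tuned setup, as evaluated in the paper, would instead train the transformer on user-item interaction data rather than relying only on off-the-shelf embeddings.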