Data Poisoning Attacks on Graph Convolutional Matrix Completion

2019 
Recommender systems have been widely adopted in many web services. Because the performance of a recommender system directly affects the profitability of the business, bad merchants are driven to boost their own revenue by mounting adversarial attacks that compromise the effectiveness of such systems. Several studies have shown that recommender systems are vulnerable to adversarial attacks, e.g., data poisoning attacks. Since different recommender systems adopt different algorithms, existing attacks are designed for specific systems. In recent years, with the development of graph deep learning, recommender systems have also begun to adopt new methods such as graph convolutional networks. More recently, graph convolutional networks themselves have been found to be susceptible to poisoning attacks. However, characteristics of the data sources in recommender systems, such as the heterogeneity of nodes and edges, pose challenges for the attack problem. To overcome this challenge, in this paper we propose data poisoning attacks on the graph convolutional matrix completion (GCMC) recommender system by adding fake users. The key idea is to make the fake users mimic the rating behavior of normal users, then propagate the information of their rating behavior toward the target item back to related normal users, attempting to interfere with the predictions of the recommender system. Furthermore, on two real-world datasets, ML-100K and Flixster, the results show that our method significantly outperforms three baseline methods: (i) random attack, (ii) popular-item-based attack, and (iii) mimicry with random scores based attack.
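The abstract does not spell out the injection procedure, so the following NumPy sketch only illustrates the mimicry idea it describes: each fake user rates a set of filler items with scores sampled from normal users' observed ratings (so the profile blends in), plus a high score on the target item. The function name `inject_fake_users` and the parameters `n_fake`, `n_filler`, and `target_rating` are hypothetical, and the paper's actual attack additionally optimizes the fake profiles against GCMC; this is not that optimization.

```python
import numpy as np

def inject_fake_users(R, target_item, n_fake, n_filler, target_rating=5, rng=None):
    """Append fake user rows to the rating matrix R (0 = unrated).

    Hypothetical sketch: each fake user rates the target item with
    target_rating and n_filler "filler" items whose choice and scores
    mimic the empirical behavior of normal users.
    """
    rng = np.random.default_rng(rng)
    n_users, n_items = R.shape

    # Item popularity among normal users; filler items are drawn
    # proportionally so fake profiles resemble real rating patterns.
    popularity = (R > 0).sum(axis=0).astype(float)
    popularity[target_item] = 0.0              # the target is rated separately
    p = popularity / popularity.sum()

    fakes = np.zeros((n_fake, n_items), dtype=R.dtype)
    for u in range(n_fake):
        fillers = rng.choice(n_items, size=n_filler, replace=False, p=p)
        for i in fillers:
            observed = R[:, i][R[:, i] > 0]    # normal users' scores for item i
            # Mimicry: sample the fake score from the observed distribution.
            fakes[u, i] = rng.choice(observed) if observed.size else rng.integers(1, 6)
        fakes[u, target_item] = target_rating  # push the target item
    return np.vstack([R, fakes])

if __name__ == "__main__":
    demo_rng = np.random.default_rng(0)
    R = demo_rng.integers(0, 6, size=(200, 100))   # toy 200-user, 100-item matrix
    R_poisoned = inject_fake_users(R, target_item=7, n_fake=10, n_filler=20, rng=0)
    print(R_poisoned.shape)                        # (210, 100)
```

The poisoned matrix would then be fed to GCMC training, where graph convolutions propagate the fake users' signals on the target item to related normal users, which is the interference mechanism the abstract describes.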