Lightweight and Secure Deep Learning-based Mobile Recommender Systems

2021 
As mobile devices such as smartphones and smartwatches become an indispensable part of daily life, people interact with a wide range of mobile applications every day for different purposes. Because many similar applications in the same category compete intensely to stand out, the performance of their personalized recommender systems is considered a major factor in their success. Meanwhile, deep learning (DL) has become a trending technique for developing recommender systems due to its outstanding effectiveness, and most state-of-the-art recommender systems in real-world scenarios are already deep learning-based. To deliver personalized results from DL-based recommender systems (DLRSs) to mobile users, the cloud-based paradigm is currently the most commonly adopted solution: DLRSs are first trained and deployed on powerful cloud servers, and then generate recommendation results for mobile users on demand. However, DLRSs running within this cloud-based paradigm are subject to network bottlenecks, privacy risks, and high energy consumption. Instead of the cloud-based paradigm, we therefore propose to develop lightweight DLRSs that can securely run, or even be trained, locally on mobile devices to avoid these problems. This thesis identifies several critical challenges of providing such local DLRS services and proposes corresponding novel solutions, which are three-fold.

First, a well-known drawback of DLRSs is that they are large and have many more parameters than conventional machine learning methods, making them memory- and computation-intensive. Mobile devices, however, are resource-constrained: they lack the computation and storage capacity to run or train such complex, large models. The size of a DLRS must therefore fit within the limited memory space, and its complexity must match the available computation power.

Second, existing approaches to DLRS training require mobile users to upload sensitive personal data, such as check-in history data, to servers. As awareness of personal privacy rises rapidly among both mobile users and the public, collecting user data and storing it on servers is becoming increasingly difficult. A privacy-by-design solution that trains DLRSs while letting mobile users keep their own data is therefore needed.

Third, deep pure collaborative filtering (CF) models, as representative DLRSs, use simple neural structures and depend only on large-scale user-item interaction data to achieve good performance, which makes them quite efficient and suitable for running on mobile devices. However, mobile recommender systems are known to suffer from data sparsity, since most mobile users are subject to access constraints (such as physical or time constraints) and can only visit a limited set of items. Existing studies addressing this problem are neither efficient nor generic, because they tend to incorporate whatever user/item features are available and then develop ad-hoc models.
The contributions of this thesis address the aforementioned challenges and comprise: (i) methodologies to compress DLRSs so that they fit on resource-constrained mobile devices, together with a knowledge distillation-based training algorithm that substantially improves the compressed model's performance, applied to a next point-of-interest (POI) DLRS (a minimal distillation sketch follows below); (ii) a federated learning-based DLRS that moves training from cloud servers to local devices to protect users' privacy, plus a differential privacy-based mechanism to defend against potentially malicious participating users (see the second sketch below); and (iii) a generic, end-to-end collaborative model that efficiently and effectively alleviates the data sparsity problem by generating reliable augmentations of the user-item interaction datasets (see the third sketch below).
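To make contribution (i) concrete, the following is a minimal sketch of generic knowledge distillation for a compressed next-POI model, not the thesis's exact algorithm. It assumes a large pretrained teacher and a compressed student that both output unnormalized scores over the POI vocabulary; the temperature `T` and weight `alpha` are illustrative hyperparameters.

```python
# Minimal knowledge-distillation sketch (generic technique, not the
# thesis's exact algorithm). `teacher` is a large pretrained next-POI
# model; `student` is the compressed model. All names are illustrative.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend the hard-label loss with a soft-label KL term.

    T softens the teacher's distribution so the student can learn the
    teacher's relative preferences among POIs; alpha balances the terms.
    """
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients match the hard-label term
    return alpha * hard + (1.0 - alpha) * soft

def train_step(student, teacher, batch, labels, optimizer):
    teacher.eval()
    with torch.no_grad():          # the teacher is frozen
        t_logits = teacher(batch)
    s_logits = student(batch)
    loss = distillation_loss(s_logits, t_logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```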
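For contribution (ii), the sketch below shows federated averaging with a simple differential-privacy style defense: each client's model update is norm-clipped and perturbed with Gaussian noise before aggregation, so no single (possibly malicious or over-exposed) client dominates the global model. This is a generic illustration, not the thesis's exact protocol; `clip_norm` and `noise_std` are illustrative parameters.

```python
# Federated averaging with per-client update clipping and Gaussian noise
# (local-DP style). Generic sketch in the spirit of contribution (ii),
# not the thesis's exact mechanism.
import copy
import torch

def client_update(global_model, local_loader, loss_fn, epochs=1, lr=0.01):
    """Train a copy of the global model on private local data and
    return the parameter delta (local minus global)."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in local_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return [p_new.detach() - p_old.detach()
            for p_new, p_old in zip(model.parameters(),
                                    global_model.parameters())]

def aggregate(global_model, deltas, clip_norm=1.0, noise_std=0.01):
    """Clip each client's whole update to clip_norm, add Gaussian noise,
    then average the noisy updates into the global model."""
    noisy = []
    for delta in deltas:
        total = torch.norm(torch.stack([d.norm() for d in delta]))
        scale = min(1.0, clip_norm / (total.item() + 1e-12))
        noisy.append([d * scale + noise_std * torch.randn_like(d)
                      for d in delta])
    with torch.no_grad():
        for i, p in enumerate(global_model.parameters()):
            p.add_(torch.stack([nd[i] for nd in noisy]).mean(dim=0))
```

Clipping bounds any one participant's influence on the aggregate, while the added noise masks individual contributions; the raw check-in data never leaves the device.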
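For contribution (iii), the following self-training style sketch illustrates the general idea of augmenting a sparse interaction dataset: a CF model scores unobserved user-item pairs, and its most confident predictions are added as pseudo-interactions before retraining. The thesis's actual collaborative augmentation model is more sophisticated; `score_fn`, `top_k`, and the selection rule here are illustrative assumptions.

```python
# Self-training style augmentation sketch for sparse user-item data.
# Illustrative only; not the thesis's actual augmentation model.
import numpy as np

def augment_interactions(score_fn, interactions, n_users, n_items, top_k=5):
    """Return extra (user, item) pairs: for each user, the top_k
    unobserved items ranked by the model's predicted preference."""
    observed = set(map(tuple, interactions))
    augmented = []
    for u in range(n_users):
        scores = score_fn(u, np.arange(n_items))  # confidence per item
        added = 0
        for i in np.argsort(scores)[::-1]:        # most confident first
            if added == top_k:
                break
            if (u, int(i)) not in observed:
                augmented.append((u, int(i)))
                added += 1
    return augmented
```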