Heterogeneous Fusion of Semantic and Collaborative Information for Visually-Aware Food Recommendation

2020 
Visually-aware food recommendation recommends food items based on their visual features. Existing methods typically use visual features pre-extracted from food classification models, which mainly encode the visual content together with limited semantic information such as food classes and ingredients. As a result, such features may not capture users' personalized visual preferences, termed collaborative information; e.g., users may attend to different colors and textures of food depending on their preferred ingredients and cooking methods. To address this problem, this paper presents a heterogeneous multi-task learning framework, termed privileged-channel infused network (PiNet). It learns visual features that carry both semantic and collaborative information by training the image encoder to simultaneously perform the ingredient prediction and food recommendation tasks. However, the heterogeneity between the two tasks means they may require different visual information and may push the model parameters in different optimization directions. To handle these challenges, PiNet first employs a dual-gating module (DGM) that encodes and passes different visual information from the image encoder to each task. Second, PiNet adopts a two-phase training strategy and two prior-knowledge incorporation methods to ensure effective model training. Experimental results on two real-world datasets show that the visual features generated by PiNet better attend to informative image regions, yielding superior performance.
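The abstract only outlines the architecture at a high level. Below is a minimal, hypothetical PyTorch sketch of how a shared image encoder, a dual-gating module, and the two task heads might be wired together; all class and parameter names (DualGatingModule, PiNetSketch, dim, n_ingredients, n_users) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DualGatingModule(nn.Module):
    """Sketch of a dual-gating module (DGM): two sigmoid gates route
    task-specific views of a shared visual feature to the ingredient
    prediction and recommendation heads."""
    def __init__(self, dim):
        super().__init__()
        self.gate_ingredient = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())
        self.gate_recommend = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, visual_feat):
        # Element-wise gating lets each task attend to different
        # channels of the shared encoder output.
        feat_ing = self.gate_ingredient(visual_feat) * visual_feat
        feat_rec = self.gate_recommend(visual_feat) * visual_feat
        return feat_ing, feat_rec

class PiNetSketch(nn.Module):
    """Shared encoder + DGM + two task heads, trained jointly so the
    encoder captures both semantic and collaborative information."""
    def __init__(self, encoder, dim, n_ingredients, n_users):
        super().__init__()
        self.encoder = encoder                # e.g., a CNN backbone -> (batch, dim)
        self.dgm = DualGatingModule(dim)
        self.ingredient_head = nn.Linear(dim, n_ingredients)
        self.user_embed = nn.Embedding(n_users, dim)

    def forward(self, image, user_ids):
        v = self.encoder(image)               # shared visual feature, (batch, dim)
        v_ing, v_rec = self.dgm(v)
        ing_logits = self.ingredient_head(v_ing)                   # multi-label ingredient scores
        rec_scores = (self.user_embed(user_ids) * v_rec).sum(-1)   # user-item preference scores
        return ing_logits, rec_scores
```

Training could then combine the two task losses, e.g. a recommendation loss plus a weighted ingredient-prediction loss; the paper's actual two-phase schedule and prior-knowledge incorporation methods are not specified in the abstract, so the weighting above is only one plausible reading.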