Learning disentangled user representation with multi-view information fusion on social networks

2021 
Abstract User representation learning is a prominent and critical task in user analysis on social networks: it derives conceptual user representations that improve the inference of user intentions and behaviors. Previous efforts have shown its substantial value in a variety of real-world applications, including product recommendation, textual content modeling, and link prediction. However, existing studies either underutilize multi-view information or neglect the tight entanglement among the underlying factors that govern user intentions, and thus derive deteriorated representations. To overcome these shortcomings, this paper proposes an adversarial fusion framework, consisting of a generator and a discriminator, that fully exploits multi-view information for user representation. The generator learns representations with a variational autoencoder and is driven by the adversarial framework to attend to informative view-specific signals, thereby integrating multi-view information. Furthermore, the variational autoencoder in the generator is specifically designed to capture and disentangle the latent factors behind user intentions. By fully utilizing multi-view information and achieving disentanglement, the model learns robust and interpretable user representations. Extensive experiments on both synthetic and real-world datasets demonstrate the superiority of the proposed model.
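To make the described architecture concrete, the following is a minimal PyTorch sketch of a generator-discriminator pair of the kind the abstract outlines: a VAE generator over fused multi-view features and a discriminator that pressures the latent space to retain view-specific signals. The abstract does not specify the actual architecture, fusion mechanism, or disentanglement objective, so everything here is an assumption: concatenation-based fusion, a β-weighted KL term (β-VAE style) as a stand-in for the paper's disentanglement design, and all names (`MultiViewVAEGenerator`, `ViewDiscriminator`, `vae_step`), dimensions, and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class MultiViewVAEGenerator(nn.Module):
    """Hypothetical generator: fuses per-view features by concatenation,
    encodes them into a factorized Gaussian latent space, and reconstructs
    the fused input. Layer sizes are illustrative, not from the paper."""
    def __init__(self, view_dims, latent_dim=16, hidden_dim=64):
        super().__init__()
        fused_dim = sum(view_dims)
        self.encoder = nn.Sequential(nn.Linear(fused_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)      # posterior mean
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # posterior log-variance
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, fused_dim),
        )

    def forward(self, views):
        x = torch.cat(views, dim=-1)                     # naive multi-view fusion
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.decoder(z), z, mu, logvar, x

class ViewDiscriminator(nn.Module):
    """Hypothetical discriminator: predicts which view a latent code came
    from, pushing the generator to keep view-specific signals."""
    def __init__(self, latent_dim, n_views):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, n_views)
        )

    def forward(self, z):
        return self.net(z)

def vae_step(gen, views, beta=4.0):
    """One illustrative generator step: reconstruction loss plus a
    beta-weighted KL term, a common (assumed) way to encourage
    disentangled latent factors."""
    recon, z, mu, logvar, x = gen(views)
    recon_loss = nn.functional.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl, z

# Usage on random data: two views of 8 features each for a batch of 4 users.
gen = MultiViewVAEGenerator(view_dims=[8, 8], latent_dim=16)
views = [torch.randn(4, 8), torch.randn(4, 8)]
loss, z = vae_step(gen, views)
loss.backward()
```

In a full adversarial training loop, the discriminator and generator would be updated alternately; the sketch above shows only the generator's VAE objective.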