Contingency-Aware Influence Maximization: A Reinforcement Learning Approach

2021 
The influence maximization (IM) problem aims to find a subset of seed nodes in a social network that maximizes the spread of influence. In this study, we focus on a sub-class of IM problems in which it is uncertain whether an invited node is willing to act as a seed, and the seeds are selected over multiple rounds. A real-world study that used IM algorithms to help spread awareness of HIV among homeless youth found that a practical obstacle to applying the classical greedy IM algorithms is the lack of high-performance computing (HPC): whenever a new social network arrives, the non-profits typically do not have the HPC resources to recompute the solutions. Motivated by this, and inspired by the line of work that uses reinforcement learning (RL) to address combinatorial optimization on graphs, we formalize the underlying problem as a Markov Decision Process (MDP), use RL to learn an IM policy over historically seen networks, and generalize to unseen networks at test time. Once trained, the RL agent produces an IM policy with negligible runtime. To fully exploit the properties of our target problem, we propose two technical innovations over existing methods: state abstraction and theoretically grounded reward shaping. Empirical results show that our proposed method achieves influence as high as the state-of-the-art IM methods used in the HIV prevention domain, while having negligible runtime during the test phase.
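For context on the baseline the abstract contrasts against, below is a minimal sketch of the classical greedy IM algorithm under the independent cascade model. This is not the paper's contingency-aware RL method; the propagation probability `p`, the number of Monte Carlo `trials`, and the use of `networkx` are illustrative assumptions.

```python
import random
import networkx as nx

def simulate_ic(graph, seeds, p=0.1):
    """One Monte Carlo run of the independent cascade model:
    each newly activated node tries once to activate each
    inactive neighbor with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph.neighbors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return len(active)

def greedy_im(graph, k, p=0.1, trials=200):
    """Classical greedy IM: repeatedly add the node with the
    largest estimated marginal gain in expected spread.
    Each candidate evaluation re-runs the cascade simulation,
    which is the HPC cost the abstract refers to."""
    seeds = set()
    for _ in range(k):
        best_node, best_spread = None, -1.0
        for v in graph.nodes:
            if v in seeds:
                continue
            spread = sum(simulate_ic(graph, seeds | {v}, p)
                         for _ in range(trials)) / trials
            if spread > best_spread:
                best_node, best_spread = v, spread
        seeds.add(best_node)
    return seeds

# Example usage on a small built-in graph.
G = nx.karate_club_graph()
print(greedy_im(G, k=3))
```

The inner loop re-estimates expected spread for every candidate node at every step, so the cost scales with the network size, the seed budget, and the number of simulations; this recomputation on each new network is what the learned RL policy avoids at test time.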