Stochastic Bandits with Graph Feedback in Non-Stationary Environments.

2021 
We study a variant of stochastic bandits in which the feedback model is specified by a graph. In this setting, after playing an arm, one observes the rewards of not only the played arm but also of the arms adjacent to it in the graph. Most existing work assumes that the reward distributions are stationary over time, an assumption that is often violated in common scenarios such as recommendation systems and online advertising. To address this limitation, we study stochastic bandits with graph feedback in non-stationary environments and propose algorithms with graph-dependent dynamic regret bounds. When the number of reward-distribution changes L is known in advance, one of our algorithms achieves an O(√(αLT)) dynamic regret bound. We also develop an adaptive algorithm that handles unknown L and attains an O(√(θLT)) dynamic regret bound. Here, α and θ are graph-dependent quantities and T is the time horizon.
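To make the feedback model concrete, here is a minimal simulation sketch of the setting described in the abstract. It is not the paper's algorithm: the GraphFeedbackEnv class, the adjacency-list graph representation, the piecewise-stationary Bernoulli rewards, and the uniform-random baseline are all illustrative assumptions.

```python
# Minimal sketch of a graph-feedback bandit in a piecewise-stationary
# environment. Illustrative only; not the algorithm proposed in the paper.
import random

class GraphFeedbackEnv:
    """Piecewise-stationary Bernoulli bandit with graph feedback.

    neighbors[i] lists the arms whose rewards are revealed when arm i is
    played (the played arm itself is always observed). change_points mark
    the rounds where the mean-reward vector switches, so their count plays
    the role of L in the abstract.
    """

    def __init__(self, neighbors, mean_segments, change_points):
        self.neighbors = neighbors
        self.mean_segments = mean_segments   # one mean vector per segment
        self.change_points = change_points   # sorted round indices
        self.t = 0

    def _means(self):
        segment = sum(1 for c in self.change_points if c <= self.t)
        return self.mean_segments[segment]

    def play(self, arm):
        """Play `arm`; return observed rewards for the arm and its neighbors."""
        means = self._means()
        observed = {arm} | set(self.neighbors[arm])
        feedback = {a: float(random.random() < means[a]) for a in observed}
        self.t += 1
        return feedback

# Toy instance: 3 arms on a path graph 0 - 1 - 2, one change at round 50 (L = 1).
env = GraphFeedbackEnv(
    neighbors={0: [1], 1: [0, 2], 2: [1]},
    mean_segments=[[0.9, 0.5, 0.1], [0.1, 0.5, 0.9]],
    change_points=[50],
)

# Naive baseline: play arms uniformly at random and measure the loss against
# the per-round best mean, i.e. the comparator used by dynamic regret.
dynamic_regret = 0.0
for t in range(100):
    arm = random.randrange(3)
    feedback = env.play(arm)
    best_mean = max(env.mean_segments[1 if t >= 50 else 0])
    dynamic_regret += best_mean - feedback[arm]

print(f"empirical dynamic regret of the random baseline: {dynamic_regret:.1f}")
```

In the toy path graph, playing arm 1 reveals the rewards of all three arms in a single round, which illustrates the extra information that graph feedback provides over standard bandit feedback.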