Outcome-based Partner Selection in Collective Risk Dilemmas

2019 
Understanding how to design agents that sustain cooperation in multi-agent systems has been a long-standing goal in distributed Artificial Intelligence. Proposed solutions rely on identifying defective agents and avoiding cooperation or interaction with them. These mechanisms of social control are traditionally studied in games with linear and deterministic payoffs, such as the Prisoner's Dilemma or the Public Goods Game. In reality, however, agents often face dilemmas in which payoffs are uncertain and non-linear, as collective success requires a minimum number of cooperators. These games are called Collective Risk Dilemmas (CRD), and it is unclear whether the previous mechanisms of cooperation remain effective in this case. Here we study cooperation in CRD through partner selection. First, we discuss an experiment in which groups of humans and robots play a CRD. We find that people only prefer cooperative partners when they lost a previous game (i.e., when collective success was not previously achieved). Secondly, we develop a simplified evolutionary game-theoretical model that sheds light on these results, pointing out the evolutionary advantages of selecting cooperative partners only when a previous game was lost. We show that this strategy constitutes a convenient balance between strictness (interact only with cooperators) and softness (cooperate and interact with everyone), thus suggesting a new way of designing agents that promote cooperation in CRD.
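The CRD mechanics described in the abstract can be sketched as a single round of play; the function name, payoff values, and parameters below are illustrative assumptions, not the paper's actual model.

```python
import random

def crd_round(actions, threshold, risk, endowment=1.0, contribution=0.5, rng=random):
    """One round of a Collective Risk Dilemma (illustrative parameters).

    Each agent either cooperates (1: contributes part of its endowment) or
    defects (0: contributes nothing). If the number of cooperators reaches
    the threshold, the collective target is met and every agent keeps what
    it did not contribute. Otherwise, with probability `risk`, a collective
    disaster wipes out every agent's remaining endowment -- payoffs are
    therefore uncertain and non-linear in the number of cooperators.
    """
    cooperators = sum(actions)
    payoffs = [endowment - contribution if a else endowment for a in actions]
    if cooperators < threshold and rng.random() < risk:
        payoffs = [0.0] * len(actions)  # target missed: collective loss
    return payoffs

# Example: 4 cooperators out of 6 with threshold 3 -> target met,
# so the risky loss never applies and payoffs are deterministic.
print(crd_round([1, 1, 1, 1, 0, 0], threshold=3, risk=0.9))
# -> [0.5, 0.5, 0.5, 0.5, 1.0, 1.0]
```

Note how defectors earn more whenever the target is met, which is what makes partner selection relevant: agents benefit from avoiding groups with too few cooperators.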