Like many web platforms, Facebook is under pressure to regulate misinformation. According to the company, users who repeatedly share misinformation ('repeat offenders') will have their distribution reduced, but little is known about the implementation or the impacts of this measure. The first contribution of this paper is to offer a methodology for investigating the implementation and consequences of this measure, based on an analysis that combines fact-checking data with engagement metrics. Using the Science Feedback and Social Science One (Condor) datasets, we identified a set of public accounts (groups and pages) that repeatedly shared misinformation during the 2019–2020 period. We find that engagement per post decreased significantly for Facebook pages after they shared two or more 'false news' items. The median change for pages identified with the Science Feedback dataset is −43%, while it reaches −62% for pages identified using the Condor dataset. Using a different approach, we identified a set of pages that claimed to be under 'reduced distribution' for repeatedly sharing misinformation and to have received a notification from Facebook. For this set of pages, we observed a median change of −25% in engagement per post when comparing the 30 days after the notification with the 30 days before. We show that this 'repeat offender' penalty did not apply to Facebook groups. Instead, we discover that groups were affected in a different way, with a sudden drop in their average engagement per post around June 9, 2020. While this drop roughly halved the groups' engagement per post, it was offset by the fact that these accounts doubled their number of posts between early 2019 and summer 2020. The net result is that the total engagement on posts from 'repeat offender' accounts (including both pages and groups) returned to its early 2019 level. Overall, Facebook's policy thus appears to contain the growth of misinformation shared by 'repeat offenders' rather than to reduce it.
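To make the before/after comparison concrete, the sketch below computes the per-account change in engagement per post around a cutoff date (e.g., the second fact-checked share, or the date a 'reduced distribution' notification was received). It is a minimal illustration of the approach described above: the column names, the pandas-based data layout, and the 30-day window parameter are assumptions for this sketch, not the exact pipeline used in the study.

```python
# Minimal sketch (assumed data layout): one row per post with columns
# "account_id", "date" (Timestamp) and "engagement"; `cutoffs` maps each
# account to the date its penalty is assumed to start.
import pandas as pd


def engagement_change(posts: pd.DataFrame, cutoffs: pd.Series, window_days: int = 30) -> pd.Series:
    """Per-account % change in mean engagement per post, after vs. before a cutoff date."""
    window = pd.Timedelta(days=window_days)
    changes = {}
    for account_id, cutoff in cutoffs.items():
        account_posts = posts[posts["account_id"] == account_id]
        before = account_posts[(account_posts["date"] >= cutoff - window) & (account_posts["date"] < cutoff)]
        after = account_posts[(account_posts["date"] >= cutoff) & (account_posts["date"] < cutoff + window)]
        if len(before) and len(after):
            changes[account_id] = 100 * (after["engagement"].mean() / before["engagement"].mean() - 1)
    return pd.Series(changes)


# The headline figures above correspond to the median of this distribution,
# e.g. engagement_change(posts, cutoffs).median()
```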
Positivity bias refers to learning more from positive than from negative events. This learning asymmetry could either reflect a preference for positive events in general, or be the upshot of a more general, and perhaps ubiquitous, “choice-confirmation” bias, whereby agents preferentially integrate information that confirms their previous decision. We systematically compared these two theories in three experiments mixing free- and forced-choice conditions, featuring factual and counterfactual learning, and varying action requirements across “go” and “no-go” trials. Computational analyses of learning rates showed clear and robust evidence in favour of the “choice-confirmation” theory: participants amplified positive prediction errors in free-choice conditions while being valence-neutral in forced-choice conditions. We suggest that a choice-confirmation bias is adaptive to the extent that it reinforces the actions that are most likely to meet an individual’s needs, i.e., freely chosen actions. In contrast, outcomes from unchosen actions are more likely to be treated impartially, i.e., to be assigned no special value in self-determined decisions.
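The asymmetry at stake can be written as a simple delta-rule update with valence- and choice-dependent learning rates. The sketch below is only an illustration of the “choice-confirmation” scheme described above, under assumed parameter names; it is not the authors’ fitted model.

```python
# Illustrative delta-rule update with choice-dependent asymmetric learning rates.
# Parameter names (alpha_conf, alpha_disconf) and the handling of forced-choice
# trials are assumptions made for this sketch.
def update_value(q: float, outcome: float, alpha_conf: float, alpha_disconf: float,
                 free_choice: bool, chosen: bool = True) -> float:
    delta = outcome - q  # prediction error (factual if `chosen`, counterfactual otherwise)
    if not free_choice:
        alpha = alpha_conf  # forced choice: valence-neutral updating (a single rate)
    else:
        # A prediction error "confirms" the choice when it is good news for the
        # chosen option or bad news for the unchosen one.
        confirms = (delta > 0) if chosen else (delta < 0)
        alpha = alpha_conf if confirms else alpha_disconf
    return q + alpha * delta


# Choice-confirmation bias corresponds to alpha_conf > alpha_disconf in free-choice trials.
```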
Facebook has claimed to fight misinformation, notably by reducing the virality of posts that link to “repeat offender” websites. The platform recently extended this policy to groups. We identified websites and groups that repeatedly publish false information according to fact-checkers and investigated the implementation and impact of Facebook’s measures against them. Our analysis reveals a significant reduction in engagement per article/post following the publication of two or more “false” links. These results highlight the need for systematic investigation of web platforms’ measures designed to limit the spread of misinformation, in order to better understand their effectiveness and consequences.
Most people envision themselves as operant agents endowed with the capacity to bring about changes in the outside world. This ability to monitor one's own causal power has long been suggested to rest upon a specific model of causal inference, i.e., a model of how our actions causally relate to their consequences. What this model is, and how it may explain departures from optimal inference, e.g., illusory control and self-attribution biases, remains a matter of conjecture. To address this question, we designed a series of novel experiments requiring participants to continuously monitor their causal influence over the task environment by discriminating changes that were caused by their own actions from changes that were not. Comparing different models of choice, we found that participants' behaviour was best explained by a model that derives the consequences of the forgone action from the action actually taken and assumes a relative divergence between the two. Importantly, this model agrees with the intuitive way of construing causal power as "difference-making", in which causally efficacious actions are those that make a difference to the world. We suggest that our model outperformed all competitors because it closely mirrors people's belief in their causal power, a belief that is well suited to learning action-outcome associations in controllable environments. We speculate that this belief may be part of the reason why reflecting upon one's own causal power fundamentally differs from reasoning about external causes.
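As a point of reference for the "difference-making" intuition invoked above, the snippet below computes the classic Delta-P contrast between outcome frequencies with and without the action. It is only meant to unpack the notion of causal power as difference-making; it does not reproduce the trial-by-trial model of forgone-action consequences fitted in the study.

```python
# Difference-making as Delta-P: P(change | action) - P(change | no action).
# Inputs are lists of 0/1 indicators of whether a change in the environment occurred.
def delta_p(outcomes_with_action, outcomes_without_action):
    p_with = sum(outcomes_with_action) / len(outcomes_with_action)
    p_without = sum(outcomes_without_action) / len(outcomes_without_action)
    return p_with - p_without


# Example: a change follows 8 of 10 actions but only 2 of 10 non-actions -> Delta-P = 0.6
# delta_p([1] * 8 + [0] * 2, [1] * 2 + [0] * 8)
```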