Conditions for Backtracking with Counterfactual Conditionals

Jung-Ho Han (JungHo_Han@Brown.Edu) 1
William Jimenez-Leal (W.JimenezLeal@Uniandes.Edu.Co) 2
Steven A. Sloman (Steven_Sloman@Brown.Edu) 1

1 Cognitive, Linguistic, and Psychological Sciences, Brown University, Box 1821, Providence, RI 02912 USA
2 Departamento de Psicologia, Universidad de los Andes, Cra. 1 No. 18A-12, Edificio Franco, Bogota, 17111, Colombia

Abstract

Counterfactual conditionals concern relations in other possible worlds. Most of these possible worlds refer to how a situation would have unfolded forward from a counterfactual assumption. In some cases, however, reasoning goes backward from the assumption, a phenomenon called backtracking. In the current study, we propose that people backtrack if and only if doing so will make a counterfactual claim true in the alternative world. We present evidence to support the proposal.

Keywords: counterfactual backtracking; causality; inference.

Introduction

Counterfactual conditionals are used in a variety of situations, from figures of speech ("if wishes were horses, beggars would ride") to causal inference ("if policy X had been implemented, millions of dollars could have been saved"). Recent psychological research has tried to clarify the link between counterfactuals and causal inference (see Sloman & Pearl, 2013, for reviews), inspired by ideas from the causal modelling framework (Pearl, 2000). Briefly, the guiding hypothesis has been that counterfactuals are represented using a special kind of operator that intervenes on a variable in a causal model in order to infer its effects. Such interventions locally modify the actual value of the variable while disconnecting it from its causal ancestors. In this context, counterfactual reasoning about the implementation of policy X enables one to draw conclusions about the possible causal consequences of the policy, but it gives no information about what other factors would have had to change for the policy to have been introduced.

Attention has focused on backtracking counterfactuals, a special type of counterfactual conditional whose antecedent allows inferring the value of upstream variables (Dehghani, Iliev, & Kaufmann, 2012; Rips, 2010; Rips & Edwards, 2013; Sloman & Lagnado, 2005). Consider, for example, the following conditional: "If the alarm had not gone off, it would have meant that I did not set it up correctly." In this case, the antecedent of the counterfactual is diagnostic of an earlier cause.

While it is clear that this inference also depends on an appropriate causal representation of the world, it seems to fall outside the scope of the account proposed within the causal modelling framework (Sloman & Lagnado, 2005): if the antecedent (the alarm clock not going off) were intervened on via the do operator, it would be rendered independent of its causes and hence non-diagnostic (therefore not implying that it had not been set up correctly).

Some researchers have attempted to explain the meaning of this sort of counterfactual either by subscribing to a dual explanation, one for forward and one for backward counterfactuals (Dehghani et al., 2012; Rips, 2010; Rips & Edwards, 2013), or by proposing an alternative unified model (Lucas & Kemp, 2012). In this paper we focus on some of the conditions that make backtracking possible when reasoning with non-backtracking counterfactuals.

How to Backtrack

Causal Bayes nets (Pearl, 2000) have been widely used to understand how people represent, and reason with, causal information. The power of this representation derives from the use of the do operator, which allows reasoners to represent the effects of actions on a causal structure, and thus to make not only observational but also interventional inferences. The do operator sets the value of a variable (do(X = x)), which licenses inferences about the effects of X. The intervention is assumed to cut the variable off from its normal causes, thus rendering it non-diagnostic of those causes. Consider a transitive causal chain running from A to B and then to C. Intervening on B produces a model in which C is still the effect of B (represented by the arrow from B to C), but the intervention on B provides no information about the state of A (represented by the grey line from A to B), as in Figure 1.

Figure 1: Transitive causal relationship.
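To make the logic of intervention concrete, the following minimal Python sketch implements the chain in Figure 1 with binary variables and assumed illustrative parameters (the code and its numbers are not from the original paper). Conditioning on an observed value of B is diagnostic of A, whereas clamping B via do(B = b) deletes the term P(B | A) from the joint distribution, so A stays at its prior while downstream inference to C is unchanged.

from itertools import product

# Chain A -> B -> C with binary variables. Parameters are assumed for
# illustration only; any values exhibiting the asymmetry would do.
P_A = {0: 0.5, 1: 0.5}
P_B_given_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}  # P_B_given_A[a][b]
P_C_given_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}  # P_C_given_B[b][c]

def joint(a, b, c, do_b=None):
    # With do(B = do_b), graph surgery replaces P(B | A) by a point mass,
    # severing B from its cause A.
    p_b = P_B_given_A[a][b] if do_b is None else float(b == do_b)
    return P_A[a] * p_b * P_C_given_B[b][c]

def prob(query, given=None, do_b=None):
    # P(query | given) by brute-force enumeration over the joint.
    given = given or {}
    num = den = 0.0
    for a, b, c in product((0, 1), repeat=3):
        world = {"A": a, "B": b, "C": c}
        if any(world[k] != v for k, v in given.items()):
            continue
        p = joint(a, b, c, do_b)
        den += p
        if all(world[k] == v for k, v in query.items()):
            num += p
    return num / den

print(prob({"A": 1}, given={"B": 1}))          # 0.9: observing B is diagnostic of A
print(prob({"A": 1}, given={"B": 1}, do_b=1))  # 0.5: do(B=1) leaves A at its prior
print(prob({"C": 1}, given={"B": 1}))          # 0.8: effects of B are the same...
print(prob({"C": 1}, given={"B": 1}, do_b=1))  # 0.8: ...whether B is observed or set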
Under certain conditions, people exhibit an undoing effect (non-diagnosticity of the intervened-on variable) and reason according to the logic of intervention (Sloman & Lagnado, 2005; Waldmann & Hagmayer, 2005). A counterfactual conditional can thus be conceived of as an inference from an imagined intervention: the antecedent is the variable intervened on, and the consequent is the effect read off from the causal model.
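As a concrete rendering of that idea, the sketch below applies the two readings to the alarm conditional from the Introduction. The deterministic structural equations (setting the alarm correctly causes it to ring; its ringing causes waking on time) are assumed for illustration. On the interventional reading the antecedent is clamped and says nothing about the set-up; a backtracking reading instead inverts the equation for the alarm.

# Deterministic structural model for the alarm example (equations assumed
# for illustration): set_correctly -> alarm_rings -> wake_on_time.

def forward(set_correctly, alarm_override=None):
    # alarm_override implements do(alarm_rings = value): the equation
    # alarm_rings := set_correctly is replaced by the clamped value.
    alarm_rings = set_correctly if alarm_override is None else alarm_override
    wake_on_time = alarm_rings
    return {"set_correctly": set_correctly,
            "alarm_rings": alarm_rings,
            "wake_on_time": wake_on_time}

actual = forward(set_correctly=True)  # actual world: set correctly, alarm rang

# Non-backtracking (interventional) reading of "if the alarm had not gone off":
# clamp the antecedent; the upstream cause keeps its actual value, so the
# antecedent is non-diagnostic of the set-up.
interventional = forward(set_correctly=actual["set_correctly"], alarm_override=False)
print(interventional)  # {'set_correctly': True, 'alarm_rings': False, 'wake_on_time': False}

# Backtracking reading: treat the antecedent as holding in the counterfactual
# world and invert alarm_rings := set_correctly to infer the earlier cause.
backtracking = forward(set_correctly=False)  # only value consistent with no ring
print(backtracking)  # {'set_correctly': False, 'alarm_rings': False, 'wake_on_time': False}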
Rips (2010) has shown that the do operator does not apply in other cases of counterfactual reasoning. In his experiments, participants answered counterfactual questions about hypothetical mechanical devices, questions that