    Adjusting Variance Parameters to Incorporate Uncertainty into Health Economics Models Following Treatment Switching
    Citations: 0 · References: 0 · Related Papers: 10
    In many applications, it is important to be able to explain the decisions of machine learning systems. An increasingly popular approach has been to seek to provide \emph{counterfactual instance explanations}. These specify close possible worlds in which, contrary to the facts, a person receives their desired decision from the machine learning system. This paper will draw on literature from the philosophy of science to argue that a satisfactory explanation must consist of both counterfactual instances and a causal equation (or system of equations) that support the counterfactual instances. We will show that counterfactual instances by themselves explain little. We will further illustrate how explainable AI methods that provide both causal equations and counterfactual instances can successfully explain machine learning predictions.
    Counterfactual conditional
    Citations (0)
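The abstract's claim, that a counterfactual instance explains little without the causal equation that supports it, can be made concrete. Below is a minimal sketch (our illustration, not the paper's method) using a hypothetical loan-decision equation; the equation, feature names, and numbers are all assumptions chosen for illustration:

```python
# Pairing a causal equation with the counterfactual instance it supports.

def decision(income, debt):
    # Hypothetical causal equation behind a loan decision:
    # approve iff income - 2*debt >= 1.
    return income - 2 * debt >= 1

# Factual instance: rejected (2.0 - 2*0.8 = 0.4 < 1).
factual = {"income": 2.0, "debt": 0.8}
print(decision(**factual))  # False

# Counterfactual instance: "had debt been 0.4, the loan would be approved."
counterfactual = {"income": 2.0, "debt": 0.4}
print(decision(**counterfactual))  # True
```

The counterfactual instance alone only states *that* the decision flips; the equation explains *why*: the decision depends on income - 2*debt crossing the threshold 1, so any debt below (income - 1) / 2 yields approval.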
    Counterfactual explanations are gaining popularity as a way of explaining machine learning models. Counterfactual examples are generally created to interpret a model's decision: if a model makes a certain decision for an instance, counterfactual examples of that instance reverse the model's decision. Counterfactual examples can be created by carefully changing particular feature values of the instance. In this work, we explore potential applications of counterfactual examples beyond model explanation. We are particularly interested in whether counterfactual examples can be good candidates for data augmentation. At the same time, we look for ways of validating the generated counterfactual examples.
    Popularity
    Counterfactual conditional
    Feature (linguistics)
    Citations (2)
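The feature-perturbation idea in the abstract above can be sketched in code. The following is a minimal illustration (not the paper's method): a toy linear loan-scoring model, with assumed weights and feature names, and a greedy search that nudges the highest-weight feature until the model's decision flips, ignoring plausibility constraints a real method would enforce:

```python
# Toy linear model and greedy counterfactual search (illustration only).
WEIGHTS = {"income": 0.6, "debt": -0.8, "years_employed": 0.3}
BIAS = -1.0

def predict(x):
    """Approve (1) if the linear score is non-negative, else reject (0)."""
    score = sum(WEIGHTS[f] * v for f, v in x.items()) + BIAS
    return 1 if score >= 0 else 0

def counterfactual(x, step=0.1, max_iters=200):
    """Nudge the highest-|weight| feature until the decision flips."""
    target = 1 - predict(x)
    cf = dict(x)
    for _ in range(max_iters):
        if predict(cf) == target:
            return cf
        f = max(WEIGHTS, key=lambda k: abs(WEIGHTS[k]))
        # Move the feature in the direction that pushes the score
        # toward the target class.
        direction = 1 if (WEIGHTS[f] > 0) == (target == 1) else -1
        cf[f] += direction * step
    return None  # no counterfactual found within the step budget

original = {"income": 0.5, "debt": 0.9, "years_employed": 1.0}
cf = counterfactual(original)
print(predict(original), "->", predict(cf))  # 0 -> 1
```

A production method would add the validation step the abstract calls for, e.g. checking that the perturbed instance stays within observed feature ranges before using it for data augmentation.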
    In two investigations we explored whether different aspects of counterfactual tasks, such as an alternative response mode, a different question type, and additional clarifying wording, could influence children's performance on such tasks. Our first investigation manipulated the response mode by allowing children to answer a counterfactual task by either arrow or finger pointing, and the question type by using both standard tasks, in which children were told a story and had to generate counterfactual alternatives to it, and counterfactual-to-reality stories, in which children had to infer reality from a given counterfactual. The arrow manipulation proved fragile and did not influence children's performance on counterfactual tasks. The question-type manipulation suggested an asymmetry between the real and the counterfactual world, with inferring reality from counterfactual alternatives easier than the reverse. Our second investigation explored whether children's performance on complex counterfactual trials, such as the discriminating trials used by Rafetseder, Schwitalla, and Perner (2013), could be supported by additional clarifying wording. We found that although children found complex counterfactual trials difficult at ages 5 and 6, additional wording significantly improved their performance.
    Counterfactual conditional
    Citations (0)
    Neural attention mechanisms have been used as a form of explanation for model behavior. Users can either passively consume an explanation or actively disagree with it and then supervise attention toward more appropriate values (attention supervision). Although attention supervision has been shown to be effective in some tasks, we find that existing attention supervision is biased, and we propose to augment it with counterfactual observations to debias it and contribute accuracy gains. To this end, we propose a counterfactual method to estimate such missing observations and debias the existing supervision. We validate the effectiveness of our counterfactual supervision on widely adopted image benchmark datasets: CUFED and PEC.
    Benchmark (surveying)
    We examined whether counterfactual thinking influences the experience of envy. Counterfactual thinking refers to comparing the situation as it is to what it could have been, and these thought processes have been shown to lead to a variety of emotions. We predicted that for envy the counterfactual thought "it could have been me" would be important. In four studies we found a clear link between such counterfactual thoughts and the intensity of envy. Furthermore, Studies 3 and 4 revealed that a manipulation known to affect the extent of counterfactual thinking (the perception of being close to obtaining the desired outcome oneself) had an effect on the intensity of envy via counterfactual thoughts. This relationship between counterfactual thinking and the experience of envy allows for new predictions concerning situations under which envy is likely to be more intense.
    Counterfactual conditional
    Affect
    • Counterfactual reasoning is a high-level, causality-based cognitive approach.
    • Prior AI tourism research often overlooked potential causal effects in data.
    • AI-based counterfactual reasoning captures causal effects for tourism big data.
    • AI-based counterfactual reasoning complements experimental design.
    Causality
    Causal reasoning
    Counterfactual conditional
    Causal model
    To explore the characteristics of counterfactual thinking in elderly individuals, this study employed a cued counterfactual-thinking paradigm. 280 elderly adults and undergraduate students (control group) were selected as subjects, and instructions were provided to induce counterfactual thinking. There was a statistically significant difference between the elderly group and the control group in the overall categories of counterfactual thinking. Counterfactual thinking in the elderly differs from that of other groups: older individuals produced the fewest counterfactual thoughts, and their counterfactual thoughts were the most reasonable.
    Counterfactual conditional
    Vertical thinking
    Citations (0)