Are anchoring vignette ratings sensitive to vignette age and sex?
Citations: 0 · References: 0
    Abstract:
Anchoring vignettes are commonly used to study and correct for differential item functioning and response bias in subjective survey questions. Self-assessed health status is a leading example. A crucial assumption of the vignette methodology is 'vignette equivalence': the health status of the person described in the vignette must be perceived by all respondents in the same way. We use data from a survey experiment conducted with a sample of almost 5000 older Americans to validate this assumption. We find weak evidence that respondents' vignette ratings may be sensitive to the sex and, for older respondents, also to the age (implied by the first name) of the person described in the vignette. Our findings suggest that vignette equivalence may not hold, at least if the potentially subtle connotations of vignette persons' names are not fully controlled.
Keywords: Vignette, Differential item functioning, Anchoring, Halo effect
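The equivalence check described in the abstract can be made concrete: under vignette equivalence, ratings of an identical vignette text should not vary with the name-implied sex or age of the vignette person. Below is a minimal sketch of such a test on simulated data. Vignette ratings are ordinal in practice and ordered-response models are the usual choice, but a plain linear model suffices to show the shape of the test; all variable names (rating, vig_female, vig_old, resp_old) and effect sizes are invented here, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000  # roughly the sample size reported in the abstract

# Simulated experiment: each respondent rates one vignette whose
# person's first name randomly implies a sex and an age group.
df = pd.DataFrame({
    "vig_female": rng.integers(0, 2, n),  # vignette person female?
    "vig_old": rng.integers(0, 2, n),     # name implies old age?
    "resp_old": rng.integers(0, 2, n),    # respondent in older group?
})

# Under vignette equivalence the coefficients on vig_female and on
# vig_old (and its interaction with resp_old) are zero; the small
# nonzero values here just make the simulated DIF detectable.
latent = 0.05 * df["vig_female"] + 0.08 * df["vig_old"] * df["resp_old"]
df["rating"] = latent + rng.normal(0, 1, n)

fit = smf.ols("rating ~ vig_female + vig_old * resp_old", data=df).fit()
print(fit.summary().tables[1])  # inspect the vignette-trait coefficients
```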
Related Papers (19)

Information systems research routinely uses survey experiments as a research design. Treatments in survey experiments expose one or more groups to new information, misinformation, or vignettes. We identify a problem of treatment recall bias that affects the magnitude and significance of a treatment's observed effect. Respondents need to recall the salient features of the treatment in order for it to have an effect. When salient treatment details are not recalled, respondents are likely to answer survey questions in a way that is statistically indistinguishable from the control group. The treatment might still have an effect on the outcome in question, but the results can be masked by respondents who do not retain its salient features. Using new survey data, we show that the nature of the treatment itself – text rather than images – significantly affects the retention of treatment details. We also show how temporal factors such as survey duration, time on treatment, engagement with the treatment, and the ratio of treatment time to survey duration affect treatment retention. Our findings suggest that survey experiments obtain the best results when they employ text-based treatments and calibrate time on treatment and survey duration according to the practices presented in the paper.
Citations: 2
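The masking mechanism described above is essentially attenuation: if only a fraction of treated respondents retain the treatment and the rest answer like controls, the observed intent-to-treat estimate shrinks toward zero by roughly that fraction. A toy simulation (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
true_effect = 0.5   # effect on respondents who recall the treatment
recall_rate = 0.4   # fraction of treated respondents who retain it

treated = rng.integers(0, 2, n).astype(bool)
recalled = treated & (rng.random(n) < recall_rate)

# Non-recallers answer like the control group, masking the effect.
outcome = rng.normal(0, 1, n) + true_effect * recalled

itt = outcome[treated].mean() - outcome[~treated].mean()
print(f"effect among recallers: {true_effect}")
print(f"observed ITT estimate:  {itt:.3f}  # ~ recall_rate * true_effect")
```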
Traditional approaches to understanding race-of-interviewer (ROI) effects have focused more on respondent error than on interviewer and survey design error. Respondents are hypothesized to alter their responses to certain items in an effort to maintain a favorable impression for the interviewer. Yet this framework assumes respondents, interviewers, and questions all work in concert to produce the effect. The variability among these components of the survey system actually makes producing a statistically significant interviewer effect quite difficult. We argue that it is those interviewers who are assigned higher workloads, and therefore interact with more respondents, who exacerbate ROI effects to the point of statistical significance. We analyze individual and aggregate pre-election data from the 1984 National Black Election Study (NBES), finding support for our hypotheses. Feeling thermometer scores that were initially influenced by the interviewer's race were reduced to non-significance once interviewer workload was considered. Our findings suggest the often elusive ROI effect is potentially more related to survey design features than to psychological processes of impression management.
Topics: Interview · Citations: 2
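The workload argument has a simple statistical core: an interviewer-level shift of fixed size only becomes detectable once that interviewer contributes enough interviews. A sketch that simulates this, with invented workloads and effect size (the t-test here is an illustration, not the NBES analysis):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def roi_pvalue(workload, roi_shift=0.3):
    """p-value for the rating shift among one interviewer's
    respondents relative to all other interviews."""
    own = rng.normal(roi_shift, 1, workload)  # this interviewer's cases
    rest = rng.normal(0.0, 1, 1000)           # everyone else's cases
    return stats.ttest_ind(own, rest).pvalue

# The same fixed ROI shift crosses into "significance" only for
# interviewers with large workloads.
for workload in (10, 50, 200):
    p = np.median([roi_pvalue(workload) for _ in range(200)])
    print(f"workload {workload:>3}: median p = {p:.3f}")
```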
Respondent inattentiveness threatens to undermine experimental studies. In response, researchers incorporate measures of attentiveness into their analyses, yet often in a way that risks introducing post-treatment bias. We propose a design-based technique, mock vignettes (MVs), to overcome these interrelated challenges. MVs feature content substantively similar to that of experimental vignettes in political science, and are followed by factual questions (mock vignette checks [MVCs]) that gauge respondents' attentiveness to the MV. Crucially, the same MV is viewed by all respondents prior to the experiment. Across five separate studies, we find that MVC performance is significantly associated with (1) stronger treatment effects and (2) other common measures of attentiveness. Researchers can therefore use MVC performance to re-estimate treatment effects, allowing for hypothesis tests that are more robust to respondent inattentiveness and yet also safeguarded against post-treatment bias. Lastly, our study offers researchers a set of empirically validated MVs for their own experiments.
Topics: Vignette · Citations: 9
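The key design point is that MVC performance is measured before random assignment, so conditioning on it avoids post-treatment bias. A schematic of the re-estimation on simulated data; the variable names (mvc_passed, treated) and the effect size are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1500

df = pd.DataFrame({
    # MVC performance is recorded BEFORE random assignment, so
    # conditioning on it cannot introduce post-treatment bias.
    "mvc_passed": rng.integers(0, 2, n),
    "treated": rng.integers(0, 2, n),
})
# Only attentive respondents absorb the treatment (size invented).
df["y"] = 0.6 * df["treated"] * df["mvc_passed"] + rng.normal(0, 1, n)

pooled = smf.ols("y ~ treated", data=df).fit()
by_mvc = smf.ols("y ~ treated * mvc_passed", data=df).fit()
print("pooled ATE estimate:     ", round(pooled.params["treated"], 3))
print("effect among MVC passers:",
      round(by_mvc.params["treated"]
            + by_mvc.params["treated:mvc_passed"], 3))
```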
Cognitive biases are known to affect all aspects of human decision-making and reasoning. Examples include misjudgment of probability, preferential attention to evidence that confirms one's beliefs, and preference for certainty. It is not known whether cognitive biases influence orthopaedic surgeon decision-making. This study measured the influence of a few cognitive biases on orthopaedic decision-making in hypothetical vignettes. The questions we addressed were as follows: Do orthopaedic surgeons display the cognitive biases of base rate neglect and confirmation bias in hypothetical vignettes? Can anchoring and framing biases be demonstrated?

One hundred ninety-six orthopaedic surgeons completed a survey consisting of three vignettes evaluating base rate neglect, five evaluating confirmation bias, and two separate vignettes each randomly exposing half of the group to different anchors and frames. For the three vignettes evaluating base rate neglect, 43% (84 of 196) chose answers consistent with base rate neglect in vignette 1, 88% (173 of 196) in vignette 2, and 35% (69 of 196) in vignette 3. Regarding confirmation bias, 51% (100 of 196) chose an answer consistent with confirmation bias for vignette 1, 11% (22 of 196) for vignette 2, 22% (43 of 196) for vignette 3, 22% (44 of 196) for vignette 4, and 29% (56 of 196) for vignette 5. There was a measurable anchoring heuristic (56% versus 34%; a difference of 22 percentage points) and framing effect (77% versus 61%; a difference of 16 percentage points).

The influence of cognitive biases can be documented in patient vignettes presented to orthopaedic surgeons. Anticipating cognitive bias and developing debiasing strategies in practice may limit the resulting errors.
Topics: Vignette, Framing effect, Anchoring, Confirmation bias · Citations: 20
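The reported anchoring difference (56% versus 34%) can be checked with a standard two-proportion test. The abstract does not report the arm sizes, so the sketch below assumes the 196 surgeons were split evenly across the two anchors:

```python
from statsmodels.stats.proportion import proportions_ztest

# 56% vs 34%, with the 196 surgeons assumed split evenly across the
# two anchor arms (98 each); the abstract does not give arm sizes.
counts = [round(0.56 * 98), round(0.34 * 98)]  # ~ 55 and 33 surgeons
nobs = [98, 98]
z, p = proportions_ztest(counts, nobs)
print(f"z = {z:.2f}, p = {p:.4f}")  # a 22-point gap is unlikely by chance
```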
In two experiments, participants were instructed to set aside their own, complete knowledge of a statistical population parameter and to take the perspective of an agent whose knowledge was limited to a random sample. Participants rated the appropriateness of the agent's conclusion about the adequacy of the sample size (which, objectively, was more than adequate). They also rated the agent's intelligence. Whereas previous work suggests that unbelievable statistical conclusions impact reasoning by provoking critical thought which enhances the detection of research flaws, the present studies presented participants with an unflawed scenario designed to assess effects of believability on bias. The results included the finding that participants' complete knowledge did indeed bias their perceptions not only of the adequacy of the sample size, but also of the rationality of the agent drawing the conclusion from the sample. The findings were interpreted in the context of research on belief bias, social attribution, and Theory of Mind.

Keywords: Reasoning, Inference, Attribution, Rationality, Statistics
Topics: Sample, Attribution bias
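"Objectively more than adequate" is a claim about sampling error: for a proportion, the margin of error shrinks with the square root of the sample size, so adequacy can be computed rather than intuited. A quick illustration (the sample sizes here are invented; the abstract does not report the agent's sample size):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% CI for a proportion
    estimated from a random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (25, 100, 400, 1600):
    print(f"n = {n:>4}: margin of error +/-{margin_of_error(n):.3f}")
# Quadrupling n halves the margin: adequacy is computable, not a judgment.
```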
When psychologists are asked to estimate the probability that their clinical judgments are correct, they often reveal an overconfidence effect. In an effort to identify sources of unwarranted confidence in clinical judgment, this study examined the relationship between four different inferential biases (dispositionism, confirmationism, data-search truncation, and narrow problem construal) and diagnostic confidence in the context of a psychological assessment task. Thirty-six clinicians were individually presented a written client case-file to read and clinically interpret aloud. Analyses of participants' verbal protocols revealed that dispositionism alone accounted for a significant proportion of the variance in psychodiagnostic confidence scores. These results indicate that dispositionally driven assessments tend to be expressed with the highest levels of confidence. The roles that professional psychology, and psychology in general, play in propagating this bias are considered.
Topics: Overconfidence effect · Citations: 18
    Cognitive interviewing is used to identify problems in questionnaires under development by asking a small number of pretest participants to verbally report their thinking while answering the draft questions. Just as responses in production interviews include measurement error, so the detection of problems in cognitive interviews can include error. In the current study, we examine error in the problem detection of both cognitive interviewers evaluating their own interviews and independent judges listening to the full set of interviews. The cognitive interviewers were instructed to probe for additional information in one of two ways: the Conditional Probe group was instructed to probe only about what respondents had explicitly reported; the Discretionary Probe group was instructed to probe whenever they felt it appropriate. Agreement about problems was surprisingly low overall, but differed by interviewing technique. The Conditional Probe interviewers uncovered fewer potential problems but with higher inter-judge reliability than did the Discretionary Probe interviewers. These differences in reliability were related to the type of probes. When interviewers in either group probed beyond the content of respondents’ verbal reports, they were prone to believe that the respondent had experienced a problem when the majority of judges did not believe this to be the case (false alarms). Despite generally poor performance at the level of individual verbal reports, judges reached relatively consistent conclusions across the interviews about which questions most needed repair. Some practical measures may improve the conclusions drawn from cognitive interviews but the quality of the findings is limited by the content of the verbal reports.
Topics: Cognitive interview, Interview · Citations: 0
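Agreement of the kind measured here is usually summarized with a chance-corrected statistic such as Cohen's kappa. The abstract does not name its agreement measure, so this is a generic sketch with invented problem/no-problem codes for two judges:

```python
def cohen_kappa(a, b):
    """Chance-corrected agreement between two binary coders."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = sum(a) / n, sum(b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)  # agreement by chance alone
    return (observed - expected) / (1 - expected)

# 1 = judge flagged a problem in the verbal report, 0 = no problem.
judge_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
judge_b = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
print(f"kappa = {cohen_kappa(judge_a, judge_b):.2f}")  # 0.40 here
```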
Directly eliciting individuals' subjective beliefs via surveys is increasingly popular in social science research, but doing so via face-to-face surveys has an important downside: the interviewer's knowledge of the topic may spill over onto the respondent's recorded beliefs. Using a randomized experiment in which interviewers implemented an information treatment, we show that reported beliefs are significantly shifted by interviewer knowledge. Trained interviewers primed respondents to use the exact numbers used in the training, nudging them away from higher answers; recorded responses decreased by about 0.3 standard deviations of the initial belief distribution. Furthermore, respondents with stronger prior beliefs were less affected by interviewer knowledge. We suggest corrections for this issue from the perspectives of interviewer recruitment, survey design, and experiment setup.
Topics: Interview, Expert elicitation · Citations: 10
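The reported 0.3 standard-deviation shift is the treatment contrast expressed in units of the initial belief distribution. A minimal sketch of that computation on simulated data; the variable names and raw effect size are invented, chosen only so the standardized shift lands near 0.3 SD:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1200

prior = rng.normal(50, 10, n)     # respondents' initial beliefs
trained = rng.integers(0, 2, n)   # 1 = interviewer received training
# Trained interviewers nudge recorded answers downward (size invented).
recorded = prior - 3.0 * trained + rng.normal(0, 5, n)

shift = recorded[trained == 1].mean() - recorded[trained == 0].mean()
print(f"shift: {shift:.2f} raw units = "
      f"{shift / prior.std():.2f} SD of the initial belief distribution")
```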