Effects of Sampling Ecology on Correlational Judgment
Richard B. Anderson (randers@bgsu.edu)
Department of Psychology, Bowling Green State University
206 Psychology Building, Bowling Green, OH 43403 USA

Michael E. Doherty (mdoher2@bgsu.edu)
Department of Psychology, Bowling Green State University
206 Psychology Building, Bowling Green, OH 43403 USA

Justin M. Gilkey (jgilkey@bgsu.edu)
Department of Psychology, Bowling Green State University
206 Psychology Building, Bowling Green, OH 43403 USA

Keywords: correlation; covariation; estimation; judgment

Introduction

Given the importance of correlation perception for adaptive behavior, it is no surprise that people's estimates of correlation are sensitive to the objective correlations (e.g., Jennings, Amabile, & Ross, 1982). However, theoretical work suggests that if people treat sample correlations (r) as the best estimates of population correlations (ρ), they should tend to produce inflated estimates of ρ, especially with small samples (e.g., Anderson, Doherty, Berg, & Friedrich, 2005). In contrast to Kareev, Lieberman, and Lev (1997), the present studies manipulated n in a procedure wherein participants estimated correlations by estimating population frequencies from randomly drawn samples (the estimates were later converted to subjective correlations for analysis). In addition, Experiment 2 included a confidence rating task, as in Clement, Mercier, and Pasto (2002). It was predicted (see Anderson et al., 2005) that estimates of ρ derived from the frequency estimates would be inflated for smaller relative to larger samples, and that this effect would be greater for higher than for lower levels of objective ρ.

Method

On each trial, participants saw a sequence of 3, 7, or 15 drawings (one drawing per two seconds). Each drawing was a facial caricature that was narrow or wide in shape and had a happy or sad expression. Each sequence was selected randomly from a population in which the correlation between facial width and facial expression was 0, .4, .8, -.4, or -.8. After viewing a sequence, participants estimated the frequency of occurrence of two particular combinations of levels of facial width and facial expression, from which the researchers later computed a subjective ρ. For example, participants estimated how many of 1000 narrow faces were happy and how many of 1000 wide faces were happy. In addition, each trial of Experiment 2 ended with the participant rating his or her confidence in the estimates. Each experiment was a within-participant factorial, with 12 or 6 trials per condition per participant (in Experiments 1 and 2, respectively).
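As a rough sketch of the conversion step just described: the paper does not report the exact formula, so the use of the phi coefficient, the function name subjective_rho, and the fixed 1000-face marginals below are assumptions made here for illustration. On this reading, the two frequency estimates fix a 2 x 2 width-by-expression table whose phi coefficient serves as the subjective ρ.

    # Illustrative only: the paper does not give its conversion formula. This sketch
    # assumes the two estimates fix a 2x2 (width x expression) table with 1000 faces
    # per width level, and takes the table's phi coefficient as the subjective rho.
    import math

    def subjective_rho(happy_given_narrow, happy_given_wide, per_level=1000):
        """Phi coefficient of the 2x2 table implied by the two frequency estimates."""
        # Cell counts: rows = narrow/wide faces, columns = happy/sad expressions.
        n11 = happy_given_narrow               # narrow and happy
        n12 = per_level - happy_given_narrow   # narrow and sad
        n21 = happy_given_wide                 # wide and happy
        n22 = per_level - happy_given_wide     # wide and sad

        # Phi = (n11*n22 - n12*n21) / sqrt(product of the four marginal totals).
        denom = math.sqrt((n11 + n12) * (n21 + n22) * (n11 + n21) * (n12 + n22))
        return (n11 * n22 - n12 * n21) / denom if denom > 0 else 0.0

    # Example: judging 800 of 1000 narrow faces and 200 of 1000 wide faces to be
    # happy implies a subjective rho of 0.6.
    print(subjective_rho(800, 200))

Because the question format fixes 1000 faces at each width level, the two estimates determine the entire table, which is why a single pair of numbers suffices.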
Results and Discussion

Because the positive and negative conditions of ρ were mathematically equivalent, the conditions were combined for purposes of analysis (thus, an estimate was scored as negative only when it was directionally opposite from objective ρ). For Experiment 1, the mean subjective ρ increased with objective ρ, F(2, 38) = 43.91, p < .001, and, contrary to predictions, increased with n, F(2, 38) = 4.16, p = .023. Also, the effect of n was greater for higher levels of objective ρ, F(4, 76) = 3.63, p < .009. Experiment 2 yielded qualitatively similar results, and also showed that confidence increased with objective ρ. The particular relationship between estimation, confidence, and n found by Clement et al. (2002) was not fully replicated. The findings suggest that decision makers' information processing may substantially alter, and even reverse, some of the informational biases intrinsic to the information ecology.

Acknowledgments

Amanda Kelley assisted with data collection. This research was supported by a grant from the National Science Foundation.

References

Anderson, R. B., Doherty, M. E., Berg, N. D., & Friedrich, J. C. (2005). Sample size and the detection of correlation—A signal detection account: Comment on Kareev (2000) and Juslin and Olsson (2005). Psychological Review, 112, 268-279.

Clement, M., Mercier, P., & Pasto, L. (2002). Sample size, confidence, and contingency judgment. Canadian Journal of Experimental Psychology, 56, 128-137.

Jennings, D. L., Amabile, T. M., & Ross, L. (1982). Informal covariation assessment: Data-based versus theory-based judgments. In D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment under uncertainty: Heuristics and biases. Cambridge: Cambridge University Press.

Kareev, Y., Lieberman, I., & Lev, M. (1997). Through a narrow window: Sample size and the perception of correlation. Journal of Experimental Psychology: General, 126, 278-287.
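The Introduction's theoretical point, that judges who report the sample correlation as their best estimate of ρ should tend to overshoot ρ when n is small, can be illustrated with a minimal Monte Carlo sketch. The sketch below is illustrative only: it assumes bivariate normal populations and Pearson's r as the judged statistic, whereas the experiments used binary facial features; the sample sizes and ρ values mirror the positive experimental conditions.

    # Minimal sketch (assumptions: bivariate normal data, Pearson's r as the judge's
    # estimate) of the small-sample point from the Introduction: the typical (median)
    # sample correlation tends to overshoot rho when n is small.
    import numpy as np

    def median_sample_r(rho, n, reps=5000, seed=0):
        """Median Pearson r over repeated samples of size n drawn from a
        bivariate normal population with correlation rho."""
        rng = np.random.default_rng(seed)
        cov = [[1.0, rho], [rho, 1.0]]
        rs = []
        for _ in range(reps):
            x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
            rs.append(np.corrcoef(x, y)[0, 1])
        return float(np.median(rs))

    # Sample sizes and rho values mirror the positive experimental conditions.
    for rho in (0.4, 0.8):
        for n in (3, 7, 15):
            print(f"rho={rho}, n={n:2d}: median sample r = {median_sample_r(rho, n):.3f}")

With these settings, the median sample correlation tends to sit above ρ at n = 3 and to move toward ρ as n increases, which is the pattern the Introduction's prediction was built on.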