More alike than different? A comparison of variance explained by cross-cultural models

2021 
Relatively little is known about the extent to which culture moderates findings in applied psychology research. To address this gap, we leverage the metaBUS database of over 1,000,000 published findings to examine how well six popular cross-cultural models explain variance in findings across 136 bivariate relationships and 56 individual cultural dimensions. We compare moderating effects attributable to Hofstede’s dimensions, GLOBE’s practices, GLOBE’s values, Schwartz’s Value Survey, Ronen and Shenkar’s cultural clusters, and the United Nations’ M49 standard. Results from 25,296 multilevel meta-analyses indicate that, after accounting for statistical artifacts, cross-cultural models explain approximately 5–7% of the variance in findings. The variance explained did not vary substantially across models. A similar set of analyses on observed effect sizes reveals differences of |r| = .05–.07 attributable to culture. Variance among the 136 bivariate relationships was explained primarily by sampling error, indicating that cross-cultural moderation assessments require atypically large sample sizes. Our results provide important information for understanding the overall level of explanatory power attributable to cross-cultural models, their relative performance, and their sensitivity to variance in the topic of study. In addition, our findings may be used to inform power analyses for future research. We discuss implications for research and practice.
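The abstract's point that these effect sizes can inform power analyses is straightforward to make concrete. Below is a minimal Python sketch (not from the paper; the function name n_for_correlation is illustrative) that uses the standard Fisher-z approximation to estimate the sample size needed to detect a correlation of a given magnitude with 80% power at a two-tailed alpha of .05:

    import math
    from scipy.stats import norm

    def n_for_correlation(r, alpha=0.05, power=0.80):
        """Approximate N needed to detect a population correlation r
        (two-tailed test) using the Fisher z transformation."""
        z_alpha = norm.ppf(1 - alpha / 2)   # critical value, ~1.96 for alpha = .05
        z_power = norm.ppf(power)           # ~0.84 for 80% power
        return math.ceil(((z_alpha + z_power) / math.atanh(r)) ** 2 + 3)

    # Cultural moderation effects of |r| = .05-.07 imply very large samples:
    for r in (0.05, 0.07):
        print(f"r = {r}: N = {n_for_correlation(r)}")
    # r = 0.05: N = 3138
    # r = 0.07: N = 1600

The resulting Ns, roughly 1,600 to 3,100 under these assumptions, illustrate why the authors characterize the sample sizes required for cross-cultural moderation assessments as atypically large.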