It is often assumed that parents completing behavior rating scales during the assessment of attention-deficit/hyperactivity disorder (ADHD) can deliberately manipulate the outcomes of the assessment. To detect such manipulation, items designed to identify over-reporting or under-reporting are sometimes embedded in these rating scales. This article presents the results of an experiment in which parents (a) read a scenario telling them that their hypothetical son's teacher had suggested their son may have ADHD and (b) considered assigned goals for the assessment. Parents then completed the accompanying Conners 3-Parent Short form (Conners 3-P[S]) in the manner they believed would achieve their assigned goals. Findings showed that parents are able to engage in deception when completing behavior rating scales. The validity scales embedded in the Conners 3-P(S), however, demonstrated mixed results for detecting parental deception: the Negative Impression validity scale accurately detected attempts to malinger in the majority of cases, whereas the Positive Impression validity scale appeared to have little to no diagnostic utility for detecting defensive responding. Clinicians utilizing behavior rating scales should carefully consider the results, and the absence of results, obtained from embedded validity scales when interpreting parent responses to behavior rating scales as part of an ADHD assessment.
Labels for scores derived from intelligence tests have been employed since such tests were first introduced in the United States. The purpose of this study was to systematically identify and document the score labels applied to IQs during the past 102 years. Using pairs of reviewers, score labels from 40 tests were reviewed, and 61 unique labels were identified. Comparative analyses by score range and by decade were completed. Results indicate a paradigm shift beginning in the 1980s that has slowly led to more common, but not universal, use of terminology focused on the statistical aspect of scores rather than on value-laden and potentially stigmatizing terms. A universal score label system would help avoid confusion, miscommunication, and biased decision making.
The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from test scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and to poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd’s method for estimating norm block sample sizes in a review of the most prominent, individually administered intelligence tests. A rigorous, double-coding process was used to obtain these estimated sample sizes for 17 intelligence tests (10 full-length multidimensional tests, 4 nonverbal intelligence tests, and 3 brief intelligence tests). Overall, 47% of the tests failed to meet the minimum standard of at least 30 participants per norm block across age groups, and estimated norm block sizes were smallest for elementary school–age children. These results can inform intelligence test selection by practitioners and researchers, and they should be considered by test publishers when developing, revising, and reporting information about their tests.