Adaptive behavior scales are vital for assessing children and adolescents who experience a range of disabling conditions in school settings. This article presents the results of an evaluation of the design characteristics, norming, scale characteristics, reliability and validity evidence, and bias identification studies supporting 14 norm-referenced, informant-based interviews and rating scales designed to measure adaptive behavior. To derive these results, the manual for each scale was reviewed using a standardized coding procedure, and information about each scale was double-coded by reviewers. Findings reveal that several evidence-based adaptive behavior scales are available to school psychologists. Concluding recommendations address selecting and using adaptive behavior scales as part of a comprehensive assessment, administering them using optimal methods, and interpreting the scores that have demonstrated the highest reliability and the largest body of validity evidence.
Abstract This study examined the exchangeability of total scores (i.e., intelligence quotients [IQs]) from three brief intelligence tests. Tests were administered to 36 children with intellectual giftedness, scored live by one set of primary examiners, and later rescored by a secondary examiner. For each student, six IQs were calculated, and all 216 values were submitted to a generalizability theory analysis. Despite strong convergent validity and reliability evidence supporting brief IQs, the resulting dependability coefficient was only .80, indicating relatively low exchangeability across tests and examiners. Although the error variance components representing the effects of the examiner, the examiner-by-examinee interaction, the examiner-by-test interaction, and the test contributed little to IQ variability, the component representing the test-by-examinee interaction contributed about one-third of the variance in IQs. These findings hold implications for selecting and interpreting brief intelligence tests and for testing for intellectual giftedness more generally.
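The dependability coefficient reported above follows from the estimated variance components of a persons-by-tests-by-examiners design. The sketch below is a minimal illustration of that formula, assuming a fully crossed p x t x e design; the variance-component values are hypothetical placeholders, not the study's estimates.

```python
# Minimal sketch of an absolute (Phi) dependability coefficient for a fully
# crossed persons x tests x examiners (p x t x e) G-theory design.
# All variance components below are hypothetical placeholders, NOT the
# study's estimated values.

def dependability(var: dict, n_tests: int = 1, n_examiners: int = 1) -> float:
    """Phi = universe-score variance / (universe-score variance + absolute error)."""
    absolute_error = (
        var["t"] / n_tests                      # test main effect
        + var["e"] / n_examiners                # examiner main effect
        + var["pt"] / n_tests                   # person-by-test interaction
        + var["pe"] / n_examiners               # person-by-examiner interaction
        + var["te"] / (n_tests * n_examiners)   # test-by-examiner interaction
        + var["pte"] / (n_tests * n_examiners)  # residual (p x t x e, error)
    )
    return var["p"] / (var["p"] + absolute_error)

# Hypothetical variance components expressed as proportions of total IQ variance.
components = {"p": 0.55, "t": 0.02, "e": 0.01, "pt": 0.33, "pe": 0.02,
              "te": 0.01, "pte": 0.06}

# One test scored by one examiner (a single IQ in hand):
print(f"Phi (1 test, 1 examiner):  {dependability(components):.2f}")
# Averaging across 3 tests and 2 examiners, mirroring the six-IQ design:
print(f"Phi (3 tests, 2 examiners): {dependability(components, 3, 2):.2f}")
```

Note how a sizable person-by-test component drags down dependability for any single IQ even when examiner-related components are negligible, which is the pattern the abstract describes.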
The representativeness, recency, and size of a test's norm sample strongly influence the accuracy of inferences drawn from its scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for estimating norm block sample sizes in a review of the most prominent individually administered intelligence tests. A rigorous double-coding process was used to obtain estimated sample sizes for 17 intelligence tests (10 full-length multidimensional tests, 4 nonverbal intelligence tests, and 3 brief intelligence tests). Overall, 47% of the tests failed to meet the minimum standard of at least 30 participants per norm block across age groups, and estimated norm block sizes were smallest for elementary school–age children. These results can inform intelligence test selection by practitioners and researchers, and they should be considered by test publishers when developing, revising, and reporting information about their tests.
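As a rough illustration of the 30-participants-per-norm-block criterion applied above, the sketch below flags age-based norm blocks whose estimated sample sizes fall short. The test name, age bands, and block sizes are hypothetical, not values from the review.

```python
# Minimal sketch of the per-norm-block adequacy check described above.
# Criterion: every age-based norm block should include >= 30 participants.
# The age bands and sample sizes here are hypothetical placeholders.

MIN_BLOCK_SIZE = 30

def flag_small_blocks(norm_blocks: dict, minimum: int = MIN_BLOCK_SIZE) -> list:
    """Return the age blocks whose estimated sample size falls below the minimum."""
    return [age for age, n in norm_blocks.items() if n < minimum]

# Hypothetical estimated norm block sizes for one test (age band -> n).
example_test = {"6:0-6:11": 24, "7:0-7:11": 28, "8:0-8:11": 35, "16:0-16:11": 52}

small = flag_small_blocks(example_test)
if small:
    print(f"Fails the minimum standard in {len(small)} block(s): {', '.join(small)}")
else:
    print("Meets the 30-per-block minimum across all age groups.")
```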
Purpose Individuals with autism spectrum disorder (ASD) often receive services from a variety of professionals. However, not all providers receive adequate training in ASD. The Leadership Education in Neurodevelopmental and Related Disabilities (LEND) program includes a core competency of increasing knowledge about neurodevelopmental and related disabilities. This study assessed trainees' ASD knowledge and self-reported confidence in working with individuals with ASD and sought to determine whether training through the LEND program increases these competencies. An additional purpose was to identify factors that predict ASD knowledge and self-reported confidence in providing services to this population, specifically in an interdisciplinary trainee sample. Design/methodology/approach Participants were 170 interdisciplinary LEND trainees during the 2017–2018 academic year. Participants across the USA completed online pre- and posttraining surveys. The surveys included demographics, a measure of ASD knowledge, questions assessing training experiences, and ratings of perceived ASD knowledge and self-reported confidence. Findings A one-way analysis of variance revealed a statistically significant difference in measured ASD knowledge across disciplines, F(7, 148) = 5.151, p < .001. Clinical trainees (e.g., psychology, pediatrics, and speech) exhibited more measured ASD knowledge than nonclinical trainees (e.g., neuroscience, legal). Additionally, training experiences, self-reported confidence, and perceived ASD knowledge predicted measured ASD knowledge. Moreover, by the end of the training year, trainees had increased their measured ASD knowledge and self-reported confidence and had gained more experience with individuals who have ASD. Originality/value These findings suggest that the LEND program may assist in preparing professionals to work with individuals with ASD. Training opportunities that combine educational and practical experience are advised for preparing interdisciplinary providers who will work with individuals with ASD.
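The group comparison reported above is a standard one-way ANOVA of knowledge scores across discipline groups. The sketch below is a minimal illustration using scipy; the discipline groups shown are only three of the eight compared in the study, and the scores are synthetic placeholders, not the study's data.

```python
# Minimal sketch of a one-way ANOVA comparing measured ASD knowledge scores
# across trainee disciplines. All scores below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical knowledge scores for three illustrative discipline groups.
psychology = rng.normal(loc=82, scale=6, size=25)
pediatrics = rng.normal(loc=80, scale=6, size=20)
neuroscience = rng.normal(loc=74, scale=6, size=22)

f_stat, p_value = stats.f_oneway(psychology, pediatrics, neuroscience)

n_groups = 3
n_total = len(psychology) + len(pediatrics) + len(neuroscience)
print(f"F({n_groups - 1}, {n_total - n_groups}) = {f_stat:.3f}, p = {p_value:.4f}")
```

A significant omnibus F, as in the study, indicates only that at least one discipline's mean differs; identifying which clinical and nonclinical groups differ requires follow-up comparisons.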