Due to physical distancing guidelines, the closure of non-essential businesses, and the closure of public schools, the role of telehealth in the delivery of psychological services for children has never been more debated. However, the transition to teleassessment is more complicated for some types of assessment than others. For instance, the remote administration of achievement and intelligence tests is a relatively recent adaptation of telehealth, and despite recommendations for rapid adoption by some policy makers and publishing companies, caution is warranted, along with careful consideration of individual and contextual variables; the existing research literature; and measurement, cultural and linguistic, and legal and ethical issues. The decision to use remotely administered achievement and intelligence tests is best made on a case-by-case basis after consideration of these factors. We discuss each of these issues, along with implications for practice and policy, and offer provisional guidance for publishing companies interested in pursuing these endeavors moving forward.
Although the field of school psychology has made progress toward the use of tests and assessment practices with empirical support over the past 20 years, many school psychology practitioners still engage in what can be described as low-value assessment practices that lack compelling scientific support, potentially taking time and resources away from practices with a demonstrated evidence base. Why do school psychologists engage in questionable assessment and interpretive practices despite decades of discrediting scientific evidence? This article critically examines several plausible explanations for the perpetuation of low-value practices in school psychology assessment. It also underscores the importance of critical thinking when evaluating assessment and interpretation practices and discusses practical recommendations to assist in advancing evidence-based assessment in school psychology training and practice as the field progresses well into the 21st century.

Impact Statement: Many school psychologists engage in assessment practices that lack compelling scientific support, potentially taking time, resources, and energy away from more effective practices. This article critically reviews reasons why these questionable assessment practices persist long after discrediting scientific evidence has been aptly presented. Recommendations are offered to promote the use of evidence-based practices and to discourage, in training and clinical practice, the use of assessment methods lacking compelling empirical support.
This study examined the group and individual part score profiles of individuals with mild intellectual disability (ID) who participated in a clinical validity study supporting the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V) and a comparison group without ID derived from the WISC-V norming sample. Descriptive analyses revealed that both groups exhibited flat profiles of part scores (i.e., the five WISC-V factor indexes), with mean global composites and part scores in the Very Low range for the ID group and in the Average range for the comparison group. However, few participants in either group exhibited profiles similar to their respective group profile, and many in both groups exhibited substantial profile scatter. Examination of global composites and part scores obtained by individuals with ID revealed that the WISC-V's global composites consistently identified those with ID using score thresholds of 70 to 75, but many individuals with ID obtained part scores much higher than this criterion. No meaningful differences were found between Black and White participants with ID. Implications focus on the risks associated with using part score elevation or profile scatter to defer identification of ID when all other criteria for the condition have been met.

Impact Statement: Because the cognitive profile of those with intellectual disability (ID) is generally low and flat, indicating a deficit in overall intellectual functioning with little variation among part scores, many presume that individuals with this disorder will exhibit the same pattern. Although most individuals with ID exhibit a low IQ consistent with the disorder, many exhibit much higher part scores, and significant part score scatter is common in individuals with ID. Failure to identify individuals with ID due to part score elevation or significant profile scatter will likely have unintended negative consequences.
Given significant changes to legislation, practice, research, and instrumentation, the purpose of this study was to examine courses on cognitive assessment in school psychology programs and to describe the (a) structure, (b) instructional strategies, (c) content, and (d) interpretive strategies taught to school psychology graduate students. One hundred and twenty‐seven instructors were surveyed, and results suggest that over the last 20 years support for teaching cognitive assessment has decreased while the content and instructional strategies have remained largely the same. Results of this study also indicate that the interpretation strategies taught rely heavily on Cattell–Horn–Carroll theory and related interpretive frameworks (e.g., cross‐battery assessment). Additionally, instructors are placing greater emphasis on multicultural sensitivity and culturally and linguistically diverse assessment than in previous decades. Implications for future research, training, and practice are discussed.
School‐based practitioners are expected to implement and report functional behavior assessments (FBAs) that are consistent with both the research literature and the law, both federal and state. However, the literature regarding how best to document the FBA process is underdeveloped. A review of applied behavior analytic and school psychology literature, as well as legislation and case law, informs FBA reporting practices regarding (a) required content, (b) graphing of data, (c) organization of reports, and (d) clarity of language. The purpose of FBA reports is discussed, and this discussion explicitly informs the use of a solution‐focused approach to improve clarity in report writing. Recommendations for best practices in FBA report writing are provided, and future research needs are highlighted.
Repeated measurement of student ability (i.e., progress monitoring) is an essential element of informed decision-making when adjusting instruction. An important characteristic of progress monitoring measures is frequent administration to identify areas of concern and to evaluate academic growth. The purpose of this study was to determine whether STAR Math is sensitive to small incremental growth across a semester. Within two southern school districts, 114 fifth-grade students' progress monitoring data were collected weekly, and a latent growth curve model was used to estimate students' change in math ability. Results indicated that STAR Math is sensitive to small incremental growth, with a statistically significant and positive slope, suggesting students using STAR Math showed improvement in ability over the semester.
Surveys reveal that many school psychologists continue to employ cognitive profile analysis despite the long-standing history of negative research results for this class of practice. This raises the question: why do questionable assessment practices persist in school psychology? To provide insight into this dilemma, this article presents the results of a content analysis of available interpretive resources in the clinical assessment literature. Although previous reviews have evaluated the content of individual assessment courses, this is the first systematic review of pedagogical resources frequently adopted in reading lists by course instructors. The interpretive guidance offered across tests within these texts was largely homogeneous, emphasizing the primary interpretation of subscale scores, de-emphasizing interpretation of global composites (i.e., FSIQ), and advocating for the use of some variant of profile analysis to interpret scores and score profiles. Implications for advancing evidence-based assessment in school psychology training and guarding against unwarranted and unsupported claims in clinical assessment are discussed.
Decision-makers in school psychology are presently engaged in determining whether and how to move forward with conducting mandated psychoeducational evaluations of students in schools during the pandemic. Whereas prominent organizations within the profession (e.g., American Psychological Association, National Association of School Psychologists) have issued guidance and encouraged practitioners to delay testing, it is not clear whether that is a viable option in every jurisdiction. Accordingly, professionals are now considering the potential use of telehealth platforms to conduct assessments, in some form, as we move forward and deal with this crisis. The goal of this brief commentary is to raise some provisional limitations associated with the use of telehealth to conduct psychological assessments that we believe will have to be considered as use of these platforms is debated. Recommendations for professional practice are also provided.