Assessing Impacts of Math in Focus, a ‘Singapore Math’ Program for American Schools: A Report of Findings from a Randomized Control Trial

2012 
Houghton Mifflin Harcourt (HMH) contracted with Empirical Education Inc. to conduct a one-year randomized control trial (RCT) aimed at producing evidence of the effectiveness of Math in Focus™ (MIF) for third, fourth, and fifth grade students. We report here on the final results of this research, which began in Clark County School District, Nevada, in August 2011.

The Math in Focus curriculum provides elementary math instruction based on the pedagogical approach used in Singapore, typified by a carefully sequenced and paced instructional style that focuses on fewer topics in greater depth at each grade level to ensure mastery. According to HMH, it takes a “concrete to pictorial to abstract” (CPA) approach to instruction that is designed to support conceptual understanding. The instruction centers on problem solving, using multiple models to help students visualize and understand math.

HMH reports that the MIF curriculum is also closely aligned with the Common Core State Standards (CCSS): “For over a decade, research studies of mathematics education in high-performing countries have pointed to the conclusion that the mathematics curriculum in the United States must become substantially more focused and coherent in order to improve mathematics achievement in this country. To deliver on the promise of common standards, the standards must address the problem of a curriculum that is ‘a mile wide and an inch deep.’ [The CCSS] are a substantial answer to that challenge” (CCSS Initiative, n.d.). The CCSS focus more on in-depth learning than previous math standards did, and this in-depth content provides for greater focus on math concepts and problem solving.

This difference between the CCSS-oriented MIF and the existing Nevada math standards and content leads us to specific expectations about where the major impacts will be seen. We expect MIF students to perform comparatively better than other students on achievement measures that emphasize depth, and comparatively worse on measures that emphasize breadth. Likewise, we expect MIF students to perform better on a test that measures complex problem solving skills, and not as well on a test that measures multiple procedural or computation skills. Because the Nevada state math standards have not fully shifted over to the CCSS, and because the state’s assessment tests students on procedural skills as well as strategic thinking and problem solving skills, we do not expect a positive impact of MIF on student state test performance (State of Nevada Department of Education, n.d.).

We used three measures of math achievement: the two sections of the Stanford Achievement Test 10 (SAT 10), Problem Solving and Procedures, and Nevada’s Criterion Referenced Test (CRT). Given the mapping of these tests to the characteristics of MIF, we address the following primary research question: Do students who belong to grade-level teams randomly assigned to receive the MIF curriculum and professional development perform differently on tests of math achievement than students in teams not randomly assigned to receive the MIF curriculum and training? More specifically, we expect a positive impact on the Problem Solving section of the SAT 10, and a smaller or no impact on SAT 10 Procedures and the CRT. We also address the following secondary questions: Is MIF differentially effective in its impact on student achievement depending on the minority status of the student?
Does MIF lead to a decrease in the percentage of math standards covered, and if so, does this decrease account for the effects of MIF on student math achievement? In addition to addressing these questions, this study documents how MIF was implemented and reports on teacher satisfaction with the program.

For this experimental study, we worked with HMH to recruit 12 schools with grades 3, 4, and 5. For most schools, grades 4 and 5 were identified as one team to be randomized, and grade 3 formed the other team. A coin toss determined which team would join the MIF group (the program group trained on MIF) and which would join the control group (the one receiving ‘business as usual’). Technically, each school constituted a randomized block, with the two randomized teams (grades 4 and 5 in one team, and grade 3 in the other) forming a matched pair. For schools that did not have a participating grade 3, we instead split grades 4 and 5 into two grade-level teams, randomizing one to the MIF group and the other to the control group. Altogether we randomized 22 grade-level teams, 12 of which were assigned to MIF and 10 to control (see the sketch following this summary).

An RCT eliminates a variety of biases that could otherwise compromise the validity of the research. For example, it ensures that teachers in both groups were not selected on the basis of their interest in trying MIF or their ability to take advantage of the new program. Random assignment to experimental conditions does not, however, assure that we can generalize the results beyond the district where the research was conducted. We designed our study to provide useful information that will support local decision making by taking into account the specifics of district characteristics and details of local implementation. The results are not applicable to school districts with practices and populations different from those in this experiment. This report provides a rich description of the conditions of the implementation to give the reader an understanding of the context for our findings.
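To make the assignment procedure concrete, the following is a minimal Python sketch of the within-school block randomization described above. The school names, team compositions, and random seed are hypothetical and for illustration only; in the actual study, 22 grade-level teams across 12 schools were randomized, 12 to MIF and 10 to control.

```python
import random

# Hypothetical illustration of the block-randomization procedure described above.
# Each school is a randomized block containing two grade-level teams; a coin toss
# decides which team receives Math in Focus (MIF) and which serves as control.

def assign_school(school_name, teams, rng):
    """Randomly assign one of a school's two teams to MIF and the other to control."""
    mif_team, control_team = rng.sample(teams, k=2)  # the "coin toss"
    return {"school": school_name, "MIF": mif_team, "control": control_team}

rng = random.Random(2011)  # fixed seed so the example is reproducible

schools = {
    # Schools with a participating grade 3: grade 3 forms one team, grades 4-5 the other.
    "School A": ["grade 3", "grades 4-5"],
    "School B": ["grade 3", "grades 4-5"],
    # A school without a participating grade 3: grades 4 and 5 form the two teams.
    "School C": ["grade 4", "grade 5"],
}

assignments = [assign_school(name, teams, rng) for name, teams in schools.items()]
for a in assignments:
    print(f'{a["school"]}: MIF -> {a["MIF"]}, control -> {a["control"]}')
```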