Residency Redesign: Much to Do

2014 
The authors of “Financing Residency Training Redesign”1 are to be commended as leaders of the exceptional Preparing the Personal Physician for Practice (P4) project funded by the Association of Family Medicine Residency Directors and the American Board of Family Medicine Foundation.2 They have taken the study of residency redesign to an important next level by estimating costs and finding that the cost of innovation represented only a small increase above the baseline cost of training. At least 2 other specialties have evaluated the content and structure of resident education contemporaneously with the P4 project in family medicine. The American Board of Pediatrics Foundation funded an in-depth analysis of pediatric residency training, the Residency Review and Redesign in Pediatrics (R3P) project,3 whose results contributed to changes in the Accreditation Council for Graduate Medical Education (ACGME) Program Requirements for Pediatrics.4 Somewhat earlier, the Educational Innovation Project (EIP) of the ACGME began its exploration of innovative change by inviting selected internal medicine residency programs to innovate and to collect information on their experience.5 The EIP produced a collaborating network of participating programs that shared their innovations, enabled the elimination of prescriptive program requirements for all internal medicine programs, and heralded elements of the ACGME's Next Accreditation System.6 Neither the redesign effort in pediatrics nor that in internal medicine produced published data on the program-level costs of these initiatives.
Taking a step back to admire these initiatives makes one aware of how rare they are. As ambitious and important as they were, viewed from the standpoint of graduate medical education (GME) as a discipline, they stand like struggling trees on a barren plain. Because funding for research in GME is scarce, many of the changes instituted are based on expert opinion rather than on rigorously evaluated, theory-based hypotheses.7 Programs have generally lacked the resources and expertise needed to evaluate the consequences of program changes on learner or patient outcomes, or on the later practice patterns of graduating residents. In addition, funds for the faculty development in medical education needed to facilitate evaluation and ongoing innovation are limited. The need for faculty development in pediatrics was evident in the quality of applications submitted to a follow-up program that solicited proposals for innovative projects to implement recommendations made by the R3P project.8 Enthusiastic proposals were seldom matched by the skills and resources needed to carry them out.
Available data suggest that it would be unwise to allow GME to evolve slowly on its own. Rates of major maternal complications cluster according to the residency programs in which obstetricians trained: complications encountered by patients of graduates of the lowest-performing quintile of residency programs were approximately one-third more frequent than complications in patients cared for by graduates of the highest-performing quintile.9,10 Initial skill levels improve only so much with experience. As the authors of that study noted, “Over 15 years, the physicians who start out with relatively poor outcomes never quite catch up to everybody else, meaning that the impact of initial skill persists.”10
As we were writing this commentary, Asch and Weinstein11 published their excellent commentary on the Institute of Medicine (IOM) report on the governance and financing of GME.12 They note that the “evidence base available to inform future directions for the substance, organization, and financing of GME is quite limited.” A major reason, they observed, is limited funding, in contrast to the funding available for biomedical research. Asch and Weinstein suggest important questions that must be explored, including whether GME should be time-based or competency-based and how competence should be ensured during many years of practice. To those, one could add: If assessment is competency-based, how should competency be assessed? To facilitate assessment, should professional activities be bundled into entrustable professional activities (EPAs)?13 If the utility of EPAs lies in dividing a specialty's practice into commonsense categories, how broad should those categories be? It is easier to conceive how one might assess an EPA such as “the care of uncomplicated pregnancies”13 than one such as “apply public health principles and improvement methodology to improve care for populations, communities, and systems.”14 For the latter, how much sampling would be needed to draw reliable inferences about ability? Is the latter an EPA as envisioned by ten Cate and Scheele, intended to facilitate assessment in the professional workplace,13,15 or is it a competency or quality applicable to a number of EPAs?
It is worth noting that underinvestment in medical education research is just 1 instance of a broader underinvestment in education research. As a result, concepts demonstrated in the laboratory have been slow to translate into practice.16,17 The utility of testing, also called “retrieval practice,” as a means of improving memory and learning was first described more than a century ago.16,18 Yet tests continue to be regarded by both teachers and students as no more than instruments for assessment.16,19 How might low-stakes testing be used to enhance resident learning? The testing effect is enhanced by what educational psychologists have termed the spacing effect, another well-established concept awaiting translational research.16,17,20 The spacing effect refers to the observation that immersion (or massed) learning is not retained as well as learning delivered in smaller doses spaced at intervals over time.16,17,21 Finally, subject matter appears to be better retained when related topics are taught together, termed “interleaving” by educational psychologists, than when they are taught 1 at a time.16,17
Holmboe et al22 have challenged GME to confront the assumptions underlying the rotational approach to medical education. They noted that frequent rotations support neither the social learning so important to contemporary practice nor the faculty-learner relationships vital to assessment and feedback. We suggest adding the need to examine assumptions about the efficiency of learning during rotational immersions in single topics and sites of practice.
Perhaps learning would be more durable if GME were to adopt, at least in part, the longitudinal integrated curricula increasingly found in undergraduate medical education.22,23 For more than a century, researchers have known that profound “forgetting” occurs after 1-time learning experiences.16 Yet the outcomes-oriented Next Accreditation System requires “regularly scheduled didactic sessions.”24 The Next Accreditation System also suggests that specialties and subspecialties conceptualize the acquisition of competencies as a sequence of developmental stages, or Milestones. The idea is that Milestones can be used to “create a logical trajectory of professional development in essential elements of competency and meet criteria for effective assessment, including feasibility, demonstration of beneficial effect on learning, and acceptability in the community.”6 Without research, we will not know whether this approach, appealing as it is in concept, will contribute positively to patient outcomes and to the professional development of future practitioners.
Is a comprehensive program of research in GME practical? It would seem to be. Although such a program would cost more than the 2% to 3% above baseline program costs estimated for family medicine, the incremental cost as a percentage of total health care spending would be vanishingly small: Medicare GME funding represents less than 2% of total Medicare spending,11 itself only a fraction of total health care spending. The Transformation Fund recently suggested by the IOM would support innovation in the organization and financing of GME.11 It would also provide funds for innovative programs and for the research needed to acquire validity evidence for GME performance measures. The proposal was immediately debated within academic medicine25 and will surely be debated in Congress; its future is uncertain. Even if this new source of funds does not come to pass, the IOM GME report has done an important service by spotlighting the need for comprehensive, systematic evaluation of the methods, outcomes, and costs of GME.
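To make “vanishingly small” concrete, the following back-of-the-envelope arithmetic is a rough sketch rather than a figure from this commentary: it assumes roughly $15 billion in annual public GME support (the approximate total reported around the time of the IOM report) and roughly $3,000 billion in total annual US health care spending.
\[
\frac{\$15\ \text{billion (GME)}}{\$3000\ \text{billion (total health spending)}} \approx 0.5\%,
\qquad
\underbrace{0.03}_{\text{3\% research increment}} \times\ 0.5\% \approx 0.015\%.
\]
Even if these assumed figures were off by a factor of 2 in either direction, a 2% to 3% research increment on GME costs would remain well under one-tenth of 1% of national health spending.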