One way a professional training program could choose to manage a clinically incompetent student was to let them proceed to the standardized certification exam. If the student was truly incompetent, they would fail the exam, and the program would not have to fail the student itself. While this was not a pleasant practice, neither was failing a student who had spent years in a training program.

Such is the "Failure to Fail" phenomenon, a term frequently mentioned at the Ottawa 2022 and AMEE 2022 conferences. Defined as a teacher’s reluctance to fail an incompetent student, the phenomenon has many underlying reasons. One is the practice described above: to avoid a difficult situation, a teacher or a training program permits an incompetent student to pass, hoping that an external certification test will fail the student for them.

The term came up frequently during the conferences because many in-person, high-stakes, national-level clinical examinations have been canceled or put on hold due to the pandemic. Thus, some training programs face a new reality: they must deliver in-house clinical examinations in place of the national-level clinical examinations. Furthermore, programs that used to permit marginally incompetent or incompetent students to pass must now fail those students themselves, because they can no longer expect the national licensing examinations to do the job for them. With the USMLE Step 2 Clinical Skills exam permanently canceled, large-scale national-level clinical examinations may now be a thing of the past.

National licensing examinations served as public proof of competence, worked as external quality auditing checks, and provided some legal recourse. Programs may now need in-house examinations to fulfill the same roles. Importantly, will society accept graduates who have never been tested in a national-level clinical examination? And can a training program ensure clinical competency without external checks?

Programmatic assessment, another term frequently mentioned at the conferences, might provide some answers. Programmatic assessment describes both a philosophical approach and a method of assessment. In programmatic assessment, test standardization and test security are not prioritized. Instead, learners are assessed in context, using assessment tools that provide authenticity. High-stakes examinations are limited, and ranking is not a goal. Rather, learners are tracked over time, assessed continually, and given rich feedback after every assessment session to improve their learning. This approach is congruent with the concept of "Assessment for Learning," in contrast to the "Assessment of Learning" normally found in standardized testing. Compared to the traditional testing approach, programmatic assessment is a more sensible way to promote mastery learning: students regularly receive feedback and opportunities to improve, every professional competency is given proper weight, and assessment takes place in context rather than in artificial settings.

If programmatic assessment is implemented effectively, learner progression will be documented in detail. Each learner will have a clear track record showing competence over time, performance in assessments to date, and competence at the end of training. This record should be enough to convince the public that a trainee is competent to practice at the expected level.

As for ensuring competency without external checks, programmatic assessment is arguably better suited to high-stakes decision-making than a national examination. According to the Ottawa 2020 consensus statement on programmatic assessment, decision-making must be based on an adequate number of data points collected over time, in multiple settings, by multiple assessors, using multiple assessment methods, and must ultimately be carried out by a clinical competence committee that knows the residents well. Compared with a national examination, which uses a single setting and a limited number of assessment methods, all in one day, programmatic assessment clearly has the advantage. If the national standard is clear and an institution is knowledgeable in programmatic assessment, a simple audit confirming that programmatic assessment has been done right may be even stronger proof of quality than a national examination.

Effective planning and implementation of programmatic assessment take time, and many institutions have struggled with implementation. Even programs that have implemented programmatic assessment must continually learn and problem-solve. However, the benefits to all stakeholders are clear, and numerous excellent resources are available to program leaders and institutions.

Did you know that the Harvard Macy Institute Community Blog has had more than 335 posts? Previous blog posts have explored topics including faculty development trends, using clear verbal communication, and micro-teaching.

Atipong Pathanasethpong, MD, MMSc (MedEd)

Atipong Pathanasethpong, MD, MMSc (MedEd) (Educators ’15, Leaders ’15) is a graduate of the MMSc in Medical Education Program at Harvard Medical School. Atipong works as an anesthesiologist and medical educator at the Faculty of Medicine, Khon Kaen University, Thailand. He is currently active in instructional design and in disseminating cognitive science concepts to his trainees and colleagues. You can reach Atipong via Twitter.